maxpert / marmot

A distributed SQLite replicator built on top of NATS

Home Page: https://maxpert.github.io/marmot/

License: MIT License

Shell 0.42% Go 99.58%
database distributed nats-streaming raft-consensus-algorithm replication sqlite3

marmot's Introduction

Marmot

What & Why?

Marmot is a distributed SQLite replicator with leaderless, eventually consistent replication. It lets you build robust replication between your nodes on top of fault-tolerant NATS JetStream.

So if you are running a read-heavy website backed by SQLite, you can easily scale it out by adding more replicated SQLite nodes. SQLite is probably the most ubiquitous database, existing almost everywhere; Marmot aims to make it even more ubiquitous for server-side applications by building a replication layer on top.

Quick Start

Download the latest Marmot release and extract the package using:

tar vxzf marmot-v*.tar.gz

From the extracted directory, run examples/run-cluster.sh. Then make a change in /tmp/marmot-1.db using:

bash > sqlite3 /tmp/marmot-1.db
sqlite3 > INSERT INTO Books (title, author, publication_year) VALUES ('Pride and Prejudice', 'Jane Austen', 1813);

Now observe the change propagate to the other database, /tmp/marmot-2.db:

bash > sqlite3 /tmp/marmot-2.db
sqlite3 > SELECT * FROM Books;

You should be able to make changes on either node and see them propagate to the others.
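If you'd rather verify propagation programmatically, here is a minimal sketch in Go (assuming the example cluster from run-cluster.sh is still running, with the Books table it creates):

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// Open node 1 (writer) and node 2 (reader) databases.
	db1, err := sql.Open("sqlite3", "/tmp/marmot-1.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db1.Close()

	db2, err := sql.Open("sqlite3", "/tmp/marmot-2.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db2.Close()

	// Insert a uniquely named row on node 1.
	title := fmt.Sprintf("Test Book %d", time.Now().UnixNano())
	_, err = db1.Exec(`INSERT INTO Books (title, author, publication_year) VALUES (?, ?, ?)`,
		title, "Jane Austen", 1813)
	if err != nil {
		log.Fatal(err)
	}

	// Replication is eventual, so poll node 2 for a few seconds.
	for i := 0; i < 20; i++ {
		var n int
		err = db2.QueryRow(`SELECT COUNT(*) FROM Books WHERE title = ?`, title).Scan(&n)
		if err == nil && n > 0 {
			fmt.Println("row replicated to /tmp/marmot-2.db")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("row did not show up on node 2 in time")
}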

Out in the wild

Here are some official and community demos showing Marmot out in the wild:

What is the difference from others?

Marmot is essentially a CDC (Change Data Capture) and replication pipeline running on top of NATS. It automatically configures the appropriate JetStreams, making sure those streams evenly distribute load over the shards, so scaling boils down to adding more nodes and rebalancing the JetStreams (auto-rebalancing is not implemented yet).

There are a few comparable solutions, like rqlite, dqlite, and LiteFS. They are either layers on top of SQLite (e.g. rqlite, dqlite) that have to sit in the middle as a network layer in order to provide replication, or they intercept physical page-level writes and stream them off to replicas. In both cases they require a single primary node that all writes have to go through, with the changes then applied to multiple read-only replicas.

Marmot, on the other hand, is built differently. It acts as a sidecar to your existing processes:

  • Instead of requiring a single primary, there is no primary at all! Any node can make changes to its local DB; Marmot uses triggers to capture your changes and then streams them off to NATS.
  • Instead of being strongly consistent, Marmot is eventually consistent, which means no locking or blocking of nodes.
  • It does not require any changes to your existing SQLite application logic for reading/writing.

Making these choices has multiple benefits:

  • You can read and write to your SQLite database like you normally do. No extensions, no VFS changes.
  • You can write on any node! You don't have to go to a single primary to write your data.
  • As long as every node starts with the same copy of the database, all mutations will eventually converge (hence eventual consistency).

What happens when there is a race condition?

In Marmot every row is uniquely mapped to a JetStream. This guarantees that for any node to publish changes to a row, it has to go through the same JetStream as everyone else. If two nodes change the same row in parallel, both nodes will compete to publish their change to the JetStream cluster. Due to the RAFT quorum constraint, only one of the writers will get its change published first. As these changes are applied (even the publisher applies its own changes from the stream), the last writer always wins. This means there is NO serializability guarantee for a transaction spanning multiple tables. This is a design choice made to avoid any sort of global locking and to preserve performance.
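To make the row-to-stream mapping concrete, here is an illustrative Go sketch (not Marmot's actual code; the hash function and shard count are assumptions) of how a deterministic hash can route every change for a given row to the same shard:

package main

import (
	"fmt"
	"hash/fnv"
)

// shardForRow deterministically maps a row identifier to one of n shards,
// so every node publishes changes for that row to the same JetStream.
// Illustrative sketch only, not Marmot's actual implementation.
func shardForRow(table string, rowID string, n uint64) uint64 {
	h := fnv.New64a()
	h.Write([]byte(table))
	h.Write([]byte{0}) // separator between table name and row key
	h.Write([]byte(rowID))
	return h.Sum64()%n + 1
}

func main() {
	// Every node computes the same shard for the same row, so concurrent
	// writers end up competing on a single stream.
	fmt.Println(shardForRow("Books", "42", 4)) // e.g. maps to shard 1..4
}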


Limitations

Right now there are a few limitations in the current solution:

  • Marmot does not propagate schema changes, so any tables you create or columns you alter won't be reflected on other nodes. This feature is being debated and will be available in a future version of Marmot.
  • You can't selectively watch tables in a DB. This is due to various limitations around the snapshot and restore mechanism.
  • WAL mode is required - since your DB is going to be accessed by multiple processes, the only way to apply multi-process changes reliably is via WAL (see the Go sketch after this list).
  • Marmot is eventually consistent - this simply means rows can get synced out of order, and SERIALIZABLE assumptions on transactions might not hold true anymore. However, your application can choose to redirect writes to a single node so that your changes are always replayed in order.
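For the WAL requirement, here is a minimal Go sketch of switching a database to WAL mode before handing it to Marmot (using the mattn/go-sqlite3 driver; the database path is an example):

package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "/tmp/app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Switch the database to WAL so multiple processes (your app plus the
	// Marmot sidecar) can work on it reliably.
	if _, err := db.Exec(`PRAGMA journal_mode=WAL;`); err != nil {
		log.Fatal(err)
	}
}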

Features

Eventually Consistent · Leaderless Replication · Fault Tolerant · Built on NATS

  • Leaderless replication never requiring a single node to handle all write load.

  • Ability to snapshot and fully recover from those snapshots. Multiple storage options for snapshots:

    • NATS Blob Storage
    • WebDAV
    • SFTP
    • S3 Compatible:
      • AWS S3
      • Minio
      • Backblaze
      • SeaweedFS
  • Built with NATS, abstracting stream distribution and replication.

  • Support for log entry compression, handling content-heavy CMS needs.

  • Sleep timeout support for serverless scenarios.

Dependencies

Starting with 0.8+, Marmot comes with an embedded nats-server with JetStream support. This not only reduces the dependencies/processes that one might have to spin up, but also provides out-of-the-box tooling like the nats CLI. You can also use existing NATS client libraries to build additional tooling and scripts, thanks to standard library support. Here is one example using Deno:

deno run --allow-net https://gist.githubusercontent.com/maxpert/d50a49dfb2f307b30b7cae841c9607e1/raw/6d30803c140b0ba602545c1c0878d3394be548c3/watch-marmot-change-logs.ts -u <nats_username> -p <nats_password> -s <comma_separated_server_list>

The output will look something like the screenshot in the original README.
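A similar watcher can be sketched in Go with the nats.go client. The subject filter below is an assumption inferred from the stream names Marmot logs (marmot-changes-...); check your actual streams and subjects with the nats CLI before relying on it:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Same credentials/servers the Deno example takes as flags.
	nc, err := nats.Connect("nats://localhost:4222",
		nats.UserInfo("<nats_username>", "<nats_password>"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Subject filter is an assumption; verify with `nats stream ls` / `nats stream info`.
	sub, err := js.SubscribeSync("marmot-change-log.>", nats.DeliverAll())
	if err != nil {
		log.Fatal(err)
	}
	for {
		msg, err := sub.NextMsg(30 * time.Second)
		if err == nats.ErrTimeout {
			continue // no changes yet, keep waiting
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d-byte change entry\n", msg.Subject, len(msg.Data))
	}
}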

Production status

  • v0.8.x introduced support for embedded NATS. This is the recommended version for production.
  • v0.7.x moves to file based configuration rather than CLI flags, and S3 compatible snapshot storage.
  • v0.6.x introduces snapshot save/restore. It's in pre-production state.
  • v0.5.x introduces change log compression with zstd.
  • v0.4.x introduces NATS based change log streaming, and continuous multi-directional sync.
  • v0.3.x is deprecated, and unstable. DO NOT USE IT IN PRODUCTION.

CLI Documentation

Marmot deliberately chooses simplicity and fewer knobs to configure. Here are the command-line options you can use to configure Marmot:

  • config - Path to a TOML configuration file. Check out config.toml comments for detailed documentation on various configurable options.
  • cleanup (default: false) - Just clean up and exit Marmot. Useful for scenarios where you are performing a cleanup of hooks and change logs.
  • save-snapshot (default: false; since 0.6.x) - Just snapshot the local database and upload the snapshot to the NATS/S3 server.
  • cluster-addr (default: none; since 0.8.x) - Sets the binding address for the cluster; when specifying this flag, at least two nodes will be required (or replication_log.replicas). It's a simple <bind_address>:<port> pair used to bind the cluster's listening server.
    • Since v0.8.4 Marmot will automatically expose a leaf server on <bind_address>:<port + 1>. This is intended to reduce the number of flags. So if you expose the cluster on port 4222, port 4223 will automatically be a leaf-server listener.
  • cluster-peers (default: none; since 0.8.x) - Comma-separated list of nats://<host>:<port>/ peers of the NATS cluster. Since v0.8.4 you can also use dns://<dns>:<port>/ for A/AAAA record lookups; Marmot will resolve the DNS entries at boot time and expand the routes to nats://<ip>:<port>/ values, one for each IP returned. There are two additional query parameters you can use:
    • min - forces Marmot to wait for a minimum number of entries (e.g. dns://foo:4222/?min=3 requires 3 DNS entries to be present before the embedded NATS server is started)
    • interval_ms - delay between DNS queries, which prevents Marmot from flooding the DNS server.
  • leaf-server (default: none; since v0.8.4) - Comma-separated list of nats://<host>:<port>/ or dns://<dns>:<port>/ entries (same formats as cluster-peers) used to connect to a cluster as a leaf node.

For more details and the internal workings of Marmot, see these docs.

FAQs & Community

  • For FAQs visit this page
  • For community visit our discord or discussions on GitHub

Our sponsor

Last but not least, we would like to thank our sponsors who have been supporting the development of this project.

JetBrains (GoLand)

marmot's People

Contributors

andrewoss, arnarg, insuusvenerati, maxpert, nclv, remram44, tylergillson, wongfei2009


marmot's Issues

What is the best way to handle schema migrations?

Currently the triggers are only attached during Marmot's start phase. Individual schema changes (new tables/columns) are not reflected at runtime; they require manually applying the change to every DB in the cluster and restarting each Marmot node to update the triggers and changelog tables.

Are there any recommendations on how to approach this in a straightforward and maintainable way?

Best,
Roman

Inconsistent?

Regarding "synced out of order" in your last point in the readme:

Marmot is eventually consistent. This simply means rows can get synced out of order, and SERIALIZABLE assumptions on transactions might not hold true anymore.

Does this mean nodes could become inconsistent and stay inconsistent?

Unable to scan global changes error no such table __marmot___change_log_global

I tried the AMD64 binaries for 0.7.4 and 0.7.5.

4:03PM DBG Opening database path=???????????????????????????????????????/scavenger-beta.db
4:03PM DBG Forcing WAL checkpoint
4:03PM DBG Stream ready... leader=lily_nats name=eros-changes-c-1 replicas=1 shard=1
4:03PM INF Listing tables to watch...
4:03PM INF Starting change data capture pipeline...
4:03PM DBG Listening stream shard=1
4:03PM DBG duration=0.394929 name=scan_changes
4:03PM ERR Unable to scan global changes error="no such table: __marmot___change_log_global"

The DB isn't open, and no other errors are logged.

Crash: "double free or corruption (!prev)"

I've been getting consistent crashes on the master server (in a master node + 1 replica setup). Both are on the latest version from the releases page. Uptime is inconsistent: sometimes it's up for a day and then crashes, and sometimes it crashes within a few hours.

They're using basic configs so I might be missing an important thing that I do not know about.

The replica server has been running fine with no crashes.

Configs:

Master (config-main.toml)

db_path="/home/tik/redis/videos-replica.v2.db"
seq_map_path="/tmp/videos-main.cbor"

node_id=1

publish=true
replicate=false

Replica (config-replica.toml)

db_path="/home/rep/tik/videos.v2.db"
seq_map_path="/tmp/videos-replica-1.cbor"

node_id=2

publish=false
replicate=true

Details

Each instance is run from the command line like:

# Master
./marmot -config config-main.toml -cluster-addr 10.1.0.12:4223 -cluster-peers 'nats://10.1.0.1:14222/'

# Replica
./marmot -config config-replica.toml -cluster-addr 10.1.0.1:14222 -cluster-peers 'nats://10.1.0.12:4223/'

The database it's using is 1.8 GB, with 4 tables of which only 1 (videos_clean) is updated frequently. The master database is itself a replica, kept separate from the production one; a script pushes changes to it every minute.

Below is the output from the most recent crash.

marmot-v0.8.5-master-crashlog.txt

Use Nats-Expected-Last-Sequence etc. to manage competing updates more deterministically

In reading over some of the Nats docs I ran across a note re the two headers Nats uses to "enforce optimistic concurrency" on a stream level or a per-subject level. nats-io/nats-server#3772

Not sure if you're currently making use of these, but perhaps they might help ensure that operations always replay in the proper order, since those headers prevent writing to the stream if you're out of sequence.

Just a thought.

Love what you're doing here!
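For reference, a minimal sketch of publishing with those guards via the nats.go client (the subject and sequence are hypothetical; Marmot does not currently document using these):

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// The publish is rejected with a "wrong last sequence" error if the
	// stream has advanced past lastSeq, i.e. if a competing writer won.
	lastSeq := uint64(41) // hypothetical last observed stream sequence
	ack, err := js.Publish("changes.books.42", []byte(`{"title":"..."}`),
		nats.ExpectLastSequence(lastSeq))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("published at stream sequence %d", ack.Sequence)
}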

Simple Go example of how to use

I am sure I am just missing something, but I cannot figure out how to use Marmot with my SQLite DB. Example code with a connection string, creating a table, an insert, ... would be great!

Snapshot RestoreFrom causing DB corruption

Right now RestoreFrom copies over the database file but does not adjust the -wal and -shm files to match, so the restored DB can be corrupted. The WAL files need to be deleted after restoring a snapshot for a cluster recovering from boot state.
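A minimal sketch of the suggested cleanup after a restore (illustrative only; the path is an example and this is not Marmot's current behavior):

package main

import (
	"log"
	"os"
)

// removeWALArtifacts deletes the -wal and -shm sidecar files after a
// snapshot restore so SQLite doesn't replay stale WAL frames over the
// freshly restored database file.
func removeWALArtifacts(dbPath string) error {
	for _, suffix := range []string{"-wal", "-shm"} {
		if err := os.Remove(dbPath + suffix); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}

func main() {
	if err := removeWALArtifacts("/tmp/marmot-1.db"); err != nil {
		log.Fatal(err)
	}
}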

Raft groups should not share one state machine

It looks like a shared state machine is used for multiple Raft groups. This doesn't seem correct to me?

My understanding is that the Multi-Raft protocol does not guarantee commit order across different Raft groups. The user has to implement a custom distributed transaction layer (e.g. 2PC, or with some addition like Percolator or TrueTime) to establish partial ordering between transactions. TiKV uses Percolator, for example.

Build for mac OSX

First of all, thanks for the cool tool.
Would you consider building executables for macOS (amd64 and arm64)?

benthos and marmot seem like a good marriage

Because Marmot follows a CDC-based pattern, it would probably be a pretty good match for Benthos.

https://github.com/benthosdev/benthos

Benthos is already used in CDC patterns, with CockroachDB for example.

so the idea is that a change to the DB gets sent to benthos, and then benthos can react.

Benthos has a NATS source / sink btw.
https://www.benthos.dev/docs/components/inputs/nats_jetstream/

Here is a simple example: https://github.com/davidandradeduarte/benthos-nats-jetstream-output-loop-bug

Another cool example: https://github.com/amirhnajafiz/jetstream-mirroring/

The use cases people would use this for are huge, IMHO.
Maybe you want to initiate certain logic, or a secondary data transform to update your business analytics DB, or whatever you want.

Keystone alternative

GraphJin does almost exactly the same thing.

It's Golang.
It's PostgreSQL, which I know is not appropriate for Marmot, but I think with a pg listener it could work with Marmot if you're interested.

web gui ?

Hey @maxpert

Was thinking that an HTMX-based web GUI might be a nice match for Marmot? All Golang of course.

What sort of things would be useful to expose?

File Replication

Pocketbase supports files: it stores the file's existence in SQLite and the actual file on the file system.

We can use NATS object store with chunking to replicate massive files.

https://github.com/mprimi/nasefa for inspiration of the concept.

IPV6 Support

Doesn't seem to work with ipv6. That would be nice to have, and Go networking has pretty good support for it, in my experience. You have a lot of dependencies though, so hopefully there are no blockers.

Thanks for the useful project.

marmot transforms datetime fields

I use marmot alongside a python application to replicate my data to secondary/tertiary servers.

Within my database, I've got several SQLite datetime fields, such as: received DATETIME NOT NULL. My application stores dates in this field using the ISO 8601 format: 2023-08-15T17:01:43+00:00. When Marmot copies this field and sends it to a replica, the field is delivered as a Unix epoch timestamp: 1692122503.

This wouldn't be an issue, since Python's datetime can parse an epoch into the correct datetime, except that my SQLite queries which implement conditionals on this field do not work against epoch timestamps.

For example, this query does not return any results for rows whose timestamps have been transformed:

query = "SELECT * FROM table WHERE received BETWEEN  :from_date AND :to_date" 
params= {"from_date":  "2000-01-01 12:00:00+00:00", "to_date": "2030-12-31 23:59:59+00:00"}

I believe Marmot should never transform customer data; this is likely an oversight/assumption.

I'm not a Go dev, but it looks like this might be happening here in the sqlite3 driver you're using: https://github.com/mattn/go-sqlite3/blob/master/sqlite3.go#L2231-L2258. I understand that if the driver behaves this way, it's not immediately in your control. But I'm raising this issue with you first since you probably don't want to damage customer data any more than I do. I would both consider alternative drivers and add unit tests verifying that data is never mutated.

NATS TLS authentication

Are you open to adding support for TLS authentication in NATS?

This will require a SIGHUP (or any way of triggering a reload) handler that can re-read the certificate and key from the provided path.

Also it seems to be impossible to specify a custom CA certificate for the connection.

This could look something like this in the config:

[nats]
urls=["nats://localhost:4222"]
ca_file="/tmp/nats/ca.crt"
cert_file="/tmp/nats/client.crt"
key_file="/tmp/nats/client.key"

Unable to run example, possible issue with my version of sqlite

Hello, I searched your issues for a clue, but I couldn't find a reference to the supported versions of SQLite.
If it helps, I do know that between 3.38 and 3.44 they got stricter about certain things.
For instance, here is a chat I had about ".dump" and single quotes vs. double quotes.

https://sqlite.org/forum/forumpost/2eeab88391e744e1

Example failed

  • Ran on mac with intel
  • Following your example.
  • Using marmot-v0.8.7-darwin-amd64.tar.gz
  • sqlite3 is 3.44.2 2023-11-24 11:41:44

Issue

Parse error: unsafe use of virtual table "pragma_function_list"

sqlite3 /tmp/marmot-1.db 
SQLite version 3.44.2 2023-11-24 11:41:44
Enter ".help" for usage hints.
sqlite> INSERT INTO Books (title, author, publication_year) VALUES ('Pride and Prejudice', 'Jane Austen', 1813);
Parse error: unsafe use of virtual table "pragma_function_list"

Log from running the cluster

./examples/run-cluster.sh
Created /tmp/marmot-1.db
Created /tmp/marmot-2.db
Created /tmp/marmot-3.db
2:40PM INF Starting nats-server from=nats node_id=1
2:40PM INF   Version:  2.10.4 from=nats node_id=1
2:40PM INF   Git:      [not set] from=nats node_id=1
2:40PM INF   Cluster:  e-marmot from=nats node_id=1
2:40PM INF   Name:     marmot-node-1 from=nats node_id=1
2:40PM INF   Node:     OWL7P9aV from=nats node_id=1
2:40PM INF   ID:       NA6IGSANNMR6TLA6GTW6NPEEQR5K5HFCTQWVLB6WGLHCXP24ZORPP77X from=nats node_id=1
2:40PM INF Starting JetStream from=nats node_id=1
2:40PM INF     _ ___ _____ ___ _____ ___ ___   _   __  __ from=nats node_id=1
2:40PM INF  _ | | __|_   _/ __|_   _| _ \ __| /_\ |  \/  | from=nats node_id=1
2:40PM INF | || | _|  | | \__ \ | | |   / _| / _ \| |\/| | from=nats node_id=1
2:40PM INF  \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_|  |_| from=nats node_id=1
2:40PM INF from=nats node_id=1
2:40PM INF          https://docs.nats.io/jetstream from=nats node_id=1
2:40PM INF from=nats node_id=1
2:40PM INF ---------------- JETSTREAM ---------------- from=nats node_id=1
2:40PM INF   Max Memory:      24.00 GB from=nats node_id=1
2:40PM INF   Max Storage:     133.71 GB from=nats node_id=1
2:40PM INF   Store Directory: "/tmp/nats/marmot-node-1/jetstream" from=nats node_id=1
2:40PM INF ------------------------------------------- from=nats node_id=1
2:40PM INF Starting JetStream cluster from=nats node_id=1
2:40PM INF Creating JetStream metadata controller from=nats node_id=1
2:40PM INF JetStream cluster bootstrapping from=nats node_id=1
2:40PM INF Listening for client connections on 0.0.0.0:56533 from=nats node_id=1
2:40PM INF Server is ready from=nats node_id=1
2:40PM INF Cluster name is e-marmot from=nats node_id=1
2:40PM INF Listening for route connections on localhost:4221 from=nats node_id=1
2:40PM WRN JetStream has not established contact with a meta leader from=nats node_id=1
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4222: connect: connection refused from=nats node_id=1
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp 127.0.0.1:4222: connect: connection refused from=nats node_id=1
2:40PM WRN Waiting for routing to be established... from=nats node_id=1
2:40PM INF Starting nats-server from=nats node_id=2
2:40PM INF   Version:  2.10.4 from=nats node_id=2
2:40PM INF   Git:      [not set] from=nats node_id=2
2:40PM INF   Cluster:  e-marmot from=nats node_id=2
2:40PM INF   Name:     marmot-node-2 from=nats node_id=2
2:40PM INF   Node:     aThtkTV0 from=nats node_id=2
2:40PM INF   ID:       NAH3JJIZ7WT4JVQN7TNACQR6E7BZG2SX675GWXNSYOUFE7ZGCCTDMH6F from=nats node_id=2
2:40PM INF Starting JetStream from=nats node_id=2
2:40PM INF     _ ___ _____ ___ _____ ___ ___   _   __  __ from=nats node_id=2
2:40PM INF  _ | | __|_   _/ __|_   _| _ \ __| /_\ |  \/  | from=nats node_id=2
2:40PM INF | || | _|  | | \__ \ | | |   / _| / _ \| |\/| | from=nats node_id=2
2:40PM INF  \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_|  |_| from=nats node_id=2
2:40PM INF from=nats node_id=2
2:40PM INF          https://docs.nats.io/jetstream from=nats node_id=2
2:40PM INF from=nats node_id=2
2:40PM INF ---------------- JETSTREAM ---------------- from=nats node_id=2
2:40PM INF   Max Memory:      24.00 GB from=nats node_id=2
2:40PM INF   Max Storage:     133.71 GB from=nats node_id=2
2:40PM INF   Store Directory: "/tmp/nats/marmot-node-2/jetstream" from=nats node_id=2
2:40PM INF ------------------------------------------- from=nats node_id=2
2:40PM INF Starting JetStream cluster from=nats node_id=2
2:40PM INF Creating JetStream metadata controller from=nats node_id=2
2:40PM INF JetStream cluster bootstrapping from=nats node_id=2
2:40PM INF Listening for client connections on 0.0.0.0:56538 from=nats node_id=2
2:40PM INF Server is ready from=nats node_id=2
2:40PM INF Cluster name is e-marmot from=nats node_id=2
2:40PM INF Listening for route connections on localhost:4222 from=nats node_id=2
2:40PM WRN JetStream has not established contact with a meta leader from=nats node_id=2
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4221: connect: connection refused from=nats node_id=2
2:40PM INF 127.0.0.1:4221 - rid:7 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:56540 - rid:8 - Route connection created from=nats node_id=1
2:40PM WRN Waiting for routing to be established... from=nats node_id=2
2:40PM ERR NATS client disconnected node_id=1
2:40PM ERR NATS client exiting node_id=1
2:40PM INF Starting nats-server from=nats node_id=3
2:40PM INF   Version:  2.10.4 from=nats node_id=3
2:40PM INF   Git:      [not set] from=nats node_id=3
2:40PM INF   Cluster:  e-marmot from=nats node_id=3
2:40PM INF   Name:     marmot-node-3 from=nats node_id=3
2:40PM INF   Node:     1p7dAfNG from=nats node_id=3
2:40PM INF   ID:       NBOHQIIF2S4VEC4DHGXB3YPVL3BYBK6DEI3YW4ISXSZEAZIOJ4S3ZBJJ from=nats node_id=3
2:40PM INF Starting JetStream from=nats node_id=3
2:40PM INF     _ ___ _____ ___ _____ ___ ___   _   __  __ from=nats node_id=3
2:40PM INF  _ | | __|_   _/ __|_   _| _ \ __| /_\ |  \/  | from=nats node_id=3
2:40PM INF | || | _|  | | \__ \ | | |   / _| / _ \| |\/| | from=nats node_id=3
2:40PM INF  \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_|  |_| from=nats node_id=3
2:40PM INF from=nats node_id=3
2:40PM INF          https://docs.nats.io/jetstream from=nats node_id=3
2:40PM INF from=nats node_id=3
2:40PM INF ---------------- JETSTREAM ---------------- from=nats node_id=3
2:40PM INF   Max Memory:      24.00 GB from=nats node_id=3
2:40PM INF   Max Storage:     133.71 GB from=nats node_id=3
2:40PM INF   Store Directory: "/tmp/nats/marmot-node-3/jetstream" from=nats node_id=3
2:40PM INF ------------------------------------------- from=nats node_id=3
2:40PM INF Starting JetStream cluster from=nats node_id=3
2:40PM INF Creating JetStream metadata controller from=nats node_id=3
2:40PM INF JetStream cluster bootstrapping from=nats node_id=3
2:40PM INF Listening for client connections on 0.0.0.0:56546 from=nats node_id=3
2:40PM INF Server is ready from=nats node_id=3
2:40PM INF Cluster name is e-marmot from=nats node_id=3
2:40PM INF Listening for route connections on localhost:4223 from=nats node_id=3
2:40PM WRN JetStream has not established contact with a meta leader from=nats node_id=3
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4221: connect: connection refused from=nats node_id=3
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4222: connect: connection refused from=nats node_id=3
2:40PM INF 127.0.0.1:56549 - rid:9 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:4221 - rid:7 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:56550 - rid:9 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:4222 - rid:8 - Route connection created from=nats node_id=3
2:40PM ERR NATS client disconnected node_id=2
2:40PM ERR NATS client exiting node_id=2
2:40PM INF 127.0.0.1:4222 - rid:10 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:56551 - rid:10 - Route connection created from=nats node_id=2
2:40PM WRN Waiting for routing to be established... from=nats node_id=3
2:40PM INF 127.0.0.1:4222 - rid:11 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:56552 - rid:11 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:4222 - rid:10 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:56553 - rid:12 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:4222 - rid:11 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:56554 - rid:13 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:4222 - rid:10 - Router connection closed: Duplicate Route from=nats node_id=1
2:40PM INF 127.0.0.1:56553 - rid:12 - Router connection closed: Duplicate Route from=nats node_id=2
2:40PM WRN Waiting for routing to be established... from=nats node_id=1
2:40PM INF Self is new JetStream cluster metadata leader from=nats node_id=2
2:40PM INF JetStream cluster new metadata leader: marmot-node-2/e-marmot from=nats node_id=1
2:40PM INF Streaming ready... node_id=1
2:40PM INF Streaming ready... node_id=1
2:40PM INF JetStream cluster new stream leader for '$G > marmot-changes-c-1' from=nats node_id=1
2:40PM INF JetStream cluster new stream leader for '$G > KV_e-marmot' from=nats node_id=1
2:40PM INF Listing tables to watch... node_id=1
2:40PM INF Starting change data capture pipeline... node_id=1
2:40PM INF Creating global change log table node_id=1
2:40PM INF Creating trigger for Books node_id=1
2:40PM INF JetStream cluster new consumer leader for '$G > marmot-changes-c-1 > AF430TAJ' from=nats node_id=1
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4222: connect: connection refused from=nats node_id=1
2:40PM INF 127.0.0.1:4221 - rid:14 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:56556 - rid:22 - Route connection created from=nats node_id=1
2:40PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4221: connect: connection refused from=nats node_id=2
2:40PM INF Streaming ready... node_id=2
2:40PM INF Streaming ready... node_id=2
2:40PM INF Listing tables to watch... node_id=2
2:40PM INF Starting change data capture pipeline... node_id=2
2:40PM INF Creating global change log table node_id=2
2:40PM INF Creating trigger for Books node_id=2
2:40PM ERR NATS client disconnected node_id=3
2:40PM ERR NATS client exiting node_id=3
2:40PM INF JetStream cluster new consumer leader for '$G > marmot-changes-c-1 > 3GqGDqy2' from=nats node_id=1
2:40PM INF 127.0.0.1:4222 - rid:25 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:56560 - rid:17 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:4221 - rid:18 - Route connection created from=nats node_id=2
2:40PM INF 127.0.0.1:56563 - rid:26 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:4221 - rid:18 - Router connection closed: Duplicate Route from=nats node_id=2
2:40PM INF 127.0.0.1:56563 - rid:26 - Router connection closed: Client Closed from=nats node_id=1
2:40PM INF 127.0.0.1:4221 - rid:14 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:4222 - rid:13 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:56564 - rid:27 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:56565 - rid:19 - Route connection created from=nats node_id=2
2:40PM ERR NATS client disconnected node_id=3
2:40PM ERR NATS client exiting node_id=3
2:40PM INF 127.0.0.1:4221 - rid:15 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:56566 - rid:28 - Route connection created from=nats node_id=1
2:40PM INF 127.0.0.1:4221 - rid:16 - Route connection created from=nats node_id=3
2:40PM INF 127.0.0.1:56567 - rid:29 - Route connection created from=nats node_id=1
2:40PM INF Streaming ready... node_id=3
2:40PM INF Streaming ready... node_id=3
2:40PM INF Listing tables to watch... node_id=3
2:40PM INF Starting change data capture pipeline... node_id=3
2:40PM INF Creating global change log table node_id=3
2:40PM INF Creating trigger for Books node_id=3
2:40PM INF JetStream cluster new consumer leader for '$G > marmot-changes-c-1 > m0DR7kkv' from=nats node_id=1

Files managed using Marmot.

Pocketbase also does file uploads to local and S3.

I would like to brainstorm the idea of using Marmot to make file uploads distributed without S3.

Suggested logic:

PB uploads to local storage, just like it does with SQLite.

A file system watcher sees the new file and then proceeds to replicate it to any other system instances, just like Marmot does for SQLite.

Advanced upload logic:

A user uploads from their device to their browser.
The browser then uploads in async chunks to the server.

Website needed

Just like the rest of the good open source projects, we need a nice, clean, well-designed static website.

How to get first snapshot to work

As a test, I have two machines with public IPs on the internet, both running pocketbase, nats-server, and marmot. If I create my admin user and collections on the first machine, how can I get the second machine to migrate over that initial set of data and then start replicating things in both directions after the initial snapshot migration?

No support for NATS connection retries

It would be useful to support configuration options for NATS connection retries in the event that a marmot follower is initialized slightly before its leader. In our use case, two hosts are provisioned simultaneously and the follower host occasionally lags the leader by as much as a few minutes.

Marmot follower error when attempting to connect prior to leader initialization:

Oct 17 00:40:04 two-node-two marmot[1693]: 12:40AM DBG Opening database node_id=2197861447266130575 path=/var/lib/rancher/k3s/server/db/state.db
Oct 17 00:40:04 two-node-two marmot[1693]: 12:40AM DBG Forcing WAL checkpoint node_id=2197861447266130575
Oct 17 00:40:07 two-node-two marmot[1693]: 12:40AM PNC Unable to initialize snapshot storage error="dial tcp X.X.X.X:4222: connect: no route to host" node_id=2197861447266130575
Oct 17 00:40:07 two-node-two marmot[1693]: panic: Unable to initialize snapshot storage
Oct 17 00:40:07 two-node-two marmot[1693]: goroutine 1 [running]:
Oct 17 00:40:07 two-node-two marmot[1693]: github.com/rs/zerolog/log.Panic.(*Logger).Panic.func1({0x1158d4c?, 0x0?})
Oct 17 00:40:07 two-node-two marmot[1693]:         /home/runner/go/pkg/mod/github.com/rs/[email protected]/log.go:376 +0x27
Oct 17 00:40:07 two-node-two marmot[1693]: github.com/rs/zerolog.(*Event).msg(0xc000282300, {0x1158d4c, 0x25})
Oct 17 00:40:07 two-node-two marmot[1693]:         /home/runner/go/pkg/mod/github.com/rs/[email protected]/event.go:156 +0x2c2
Oct 17 00:40:07 two-node-two marmot[1693]: github.com/rs/zerolog.(*Event).Msg(...)
Oct 17 00:40:07 two-node-two marmot[1693]:         /home/runner/go/pkg/mod/github.com/rs/[email protected]/event.go:108
Oct 17 00:40:07 two-node-two marmot[1693]: main.main()
Oct 17 00:40:07 two-node-two marmot[1693]:         /home/runner/work/marmot/marmot/marmot.go:66 +0x70a
Oct 17 00:40:07 two-node-two systemd[1]: marmot.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 17 00:40:07 two-node-two systemd[1]: marmot.service: Failed with result 'exit-code'.
Oct 17 00:40:07 two-node-two systemd[1]: marmot.service: Scheduled restart job, restart counter is at 2.
Oct 17 00:40:07 two-node-two systemd[1]: Stopped Marmot synchronizes the k8s state in SQLite between nodes in a two node topology.
Oct 17 00:40:07 two-node-two systemd[1]: Started Marmot synchronizes the k8s state in SQLite between nodes in a two node topology.
Oct 17 00:40:07 two-node-two marmot[1699]: 12:40AM DBG Opening database node_id=2197861447266130575 path=/var/lib/rancher/k3s/server/db/state.db
Oct 17 00:40:07 two-node-two marmot[1699]: 12:40AM DBG Forcing WAL checkpoint node_id=2197861447266130575
Oct 17 00:40:10 two-node-two marmot[1699]: 12:40AM PNC Unable to initialize snapshot storage error="dial tcp X.X.X.X:4222: connect: no route to host" node_id=2197861447266130575
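For what it's worth, the nats.go client already exposes reconnect/retry options that could be wired through to Marmot's configuration; a minimal sketch (the URL is a placeholder, and whether/how Marmot should expose these is exactly what this issue asks for):

package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// RetryOnFailedConnect makes Connect return a connection that keeps
	// retrying in the background instead of failing immediately;
	// MaxReconnects(-1) retries forever.
	nc, err := nats.Connect("nats://X.X.X.X:4222",
		nats.RetryOnFailedConnect(true),
		nats.ReconnectWait(2*time.Second),
		nats.MaxReconnects(-1),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	log.Println("connected (or still retrying):", nc.IsConnected())
}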

CDC as a feature with multiple sinks

I've had a good look at what you've done with this project and I like it a lot. I've been thinking of ways to add CDC to SQLite3 and this abstraction is really nice and simple.

I've managed to "hack" in a CDC event pusher to an HTTP endpoint as an example, but I don't know a lot about the replication protocol, and I found that with multiple nodes the CDC events are published once per node.

Wondering if you are open to implementing CDC replication functionality similar to Debezium, namely dispatching events to different sinks.

Consider cgo-free version of sqlite

Since this project is more deeply involved with SQLite than most others, this might very well not be an option, but have you considered/tested using https://pkg.go.dev/modernc.org/sqlite instead of the standard github.com/mattn/go-sqlite3?

It is reportedly a bit slower, since it's transpiled from C to non-optimized Go, but it has worked well for my use cases. And building is a breeze :) (I actually could not build Marmot without using a Docker image on NixOS.)

Are there any other cgo dependencies that I'm missing?

Snapshot storage for S3 compatible APIs

Right now we store snapshots inside the NATS object store. While this might satisfy some users' use cases, we should support all S3-compatible storage layers for saving/restoring snapshots. Primary testing targets:

  • AWS S3
  • Minio
  • Seaweed
  • Backblaze

Roadmap

Could a basic roadmap be established?

It would help me and others see where we can dovetail in to help.

Support for 2 nodes only?

Earlier versions of marmot didn't require JetStream. As far as I can tell, I can still connect to a regular nats server - I can see the connections - but it seems that sync isn't working.

Another question:
If I understand this properly, writes to one database trigger inserts into a dedicated table, which is then read by Marmot and sent to NATS. The question is: how often does Marmot read changes, and is this configurable?

sql: Scan error on column index 4, name "pk": sql/driver: couldn't convert 2 into type bool

Trying to set up an example with a sufficiently complex DB (https://github.com/disarticulate/marmot/tree/master/example/single-leader) to get a better idea of the current NATS configuration.

Ran into the error in the title. Not sure what the cause is, but it looks like pk refers to the IsPrimaryKey bool field (tagged db:"pk") scanned from the query query := "SELECT name, type, notnull, dflt_value, pk FROM pragma_table_info(?)" from here:
https://github.com/maxpert/marmot/blob/master/db/sqlite.go

	// Fallback: a table without an explicit primary key uses SQLite's implicit rowid.
	if !hasPrimaryKey {
		tableInfo = append(tableInfo, &ColumnInfo{
			Name:         "rowid",
			IsPrimaryKey: true,
			Type:         "INT",
			NotNull:      true,
			DefaultValue: nil,
		})
	}

This Stack Overflow answer (https://stackoverflow.com/questions/10472103/sqlite-query-to-find-primary-keys) indicates the referenced pk column is not a bool but an integer: the column's position within the primary key.

I assume the fix is (unfortunately) to evaluate pk as an integer rather than a bool.
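A minimal sketch of that fix: scan pk as an integer (per the SQLite docs it is the column's 1-based position within the primary key, or 0 if the column is not part of it) and derive the boolean from it (illustrative, not Marmot's actual code):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "/tmp/marmot-1.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`SELECT name, type, "notnull", dflt_value, pk FROM pragma_table_info(?)`, "Books")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var (
			name, colType string
			notNull       bool
			dfltValue     sql.NullString
			pkPos         int // 1-based position within the primary key; 0 if not part of it
		)
		if err := rows.Scan(&name, &colType, &notNull, &dfltValue, &pkPos); err != nil {
			log.Fatal(err)
		}
		// Composite keys yield pkPos values like 2, which cannot be scanned into a bool.
		isPrimaryKey := pkPos > 0
		fmt.Println(name, colType, notNull, isPrimaryKey)
	}
}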

SQLite "database is locked" errors, and Marmot crashes after continuously inserting for a long time

Hi, I am trying to set up an SQLite cluster using Marmot. While continuously writing to node1 at a rate of 500 records per 5 seconds,
I am getting a database-locked error. Am I missing something in the configuration? I have configured the nodes with the default configurations mentioned in the repo.
If I increase the rate to 1000 per second, it also crashes.

Insert Users Create Users Err &{0xc0000f2d00 INSERT INTO ip_usernames (endpoint, username, domain_name, last_updated_timestamp) VALUES (?, ?, ?, ?) <nil> {{0 0} 0 0 0 0} 0xc0004b4080 <nil> <nil> {0 0} true [] 0} Error is database is locked (5) (SQLITE_BUSY)
Even after ingesting at a low rate, the database is getting locked.

Detecting successful/failed replication of a write

Marmot can undo a locally committed write if it conflicts with a write that was already replicated to the other nodes.

An application might prefer to only respond - possibly only for some of its writes - once the write has been successfully replicated. For that, Marmot would need to provide some mechanism for the application to detect when the write in a particular transaction has been successfully replicated or rejected.

I imagine the application choosing to do that would have to be prepared to handle at least the following cases:

  • Marmot confirming the write was successfully replicated,
  • Marmot confirming the write was rejected by the cluster,
  • Marmot not confirming anything in a time-frame acceptable to the application (i.e., a timeout),
  • Marmot being turned off.
    • I guess, depending on what mechanism is used for signalling, this might or might not be distinguishable from the timeout case.

Perf - 138.3 writes/sec is really slow...

The README says 138.3 writes/sec for the latest version.

I really like Marmot, but the perf numbers in the README are alarming.

Can you qualify why it's so slow?

Was it doing cross-DC replication, or is something else slowing it down so much?

polling doesn't start until a change is detected

I set up polling due to an inability to detect changes (running inside Docker; I can probably fix this).

With two Marmot services, one leader and one follower, polling (scan_changes) only starts after an insert/update is performed.

Expected behavior: polling/scan_changes should start on boot-up.

Add a simple example. Can be a Marmot admin GUI later

Use overmind to make it easy to run 3 instances, to show it works and to bench it locally:

https://github.com/DarthSim/overmind

Snapshot into the NATS store so we don't need S3 for the demo. Less hassle and still HA. Add MinIO later if needed.

The demo can use templ to make it easy to have a GUI that updates reactively:

https://github.com/joerdav/go-htmx-examples

Later this web GUI approach can be used to provide a reactive web GUI for managing Marmot: as changes occur, the web GUI updates.

https://github.com/maxpert/marmot/releases/tag/v0.8.8-alpha.4 failure

Just kicking the tires on the latest tagged release. It almost works...

The embedded NATS servers are not finding each other, with errors as shown in the output below.

my Makefile:

# https://github.com/maxpert/marmot

BIN=$(PWD)/.bin
export PATH:=$(PATH):$(BIN)


print:

init:
	rm -rf $(BIN)
	mkdir -p $(BIN)
	
dep: init
	curl -L https://github.com/maxpert/marmot/releases/download/v0.8.8-alpha.4/marmot-v0.8.8-alpha.4-darwin-arm64.tar.gz | tar -zx -C $(BIN)

start:
	chmod +x $(BIN)/marmot
	chmod +x $(BIN)/examples/run-cluster.sh
	cp $(BIN)/marmot $(BIN)/examples
	cd $(BIN)/examples && ./run-cluster.sh

And the output:

chmod +x /Users/apple/workspace/go/src/github.com/gedw99/kanka-cloudflare/modules/nats/examples/marmot/.bin/marmot
chmod +x /Users/apple/workspace/go/src/github.com/gedw99/kanka-cloudflare/modules/nats/examples/marmot/.bin/examples/run-cluster.sh
cp /Users/apple/workspace/go/src/github.com/gedw99/kanka-cloudflare/modules/nats/examples/marmot/.bin/marmot /Users/apple/workspace/go/src/github.com/gedw99/kanka-cloudflare/modules/nats/examples/marmot/.bin/examples
cd /Users/apple/workspace/go/src/github.com/gedw99/kanka-cloudflare/modules/nats/examples/marmot/.bin/examples && ./run-cluster.sh
Created /tmp/marmot-1.db
Created /tmp/marmot-2.db
Created /tmp/marmot-3.db
2:06PM INF Starting nats-server from=nats node_id=5973743519734446439
2:06PM INF   Version:  2.10.4 from=nats node_id=5973743519734446439
2:06PM INF   Git:      [not set] from=nats node_id=5973743519734446439
2:06PM INF   Cluster:  e-marmot from=nats node_id=5973743519734446439
2:06PM INF   Name:     marmot-node-5973743519734446439 from=nats node_id=5973743519734446439
2:06PM INF   Node:     16AHgXE3 from=nats node_id=5973743519734446439
2:06PM INF   ID:       NB4WCSC7ADZLDR4LNCZU6XGUSEFKZL5UFTEA3SBYL3WUMNTA2L3UMPLN from=nats node_id=5973743519734446439
2:06PM INF Starting JetStream from=nats node_id=5973743519734446439
2:06PM INF     _ ___ _____ ___ _____ ___ ___   _   __  __ from=nats node_id=5973743519734446439
2:06PM INF  _ | | __|_   _/ __|_   _| _ \ __| /_\ |  \/  | from=nats node_id=5973743519734446439
2:06PM INF | || | _|  | | \__ \ | | |   / _| / _ \| |\/| | from=nats node_id=5973743519734446439
2:06PM INF  \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_|  |_| from=nats node_id=5973743519734446439
2:06PM INF from=nats node_id=5973743519734446439
2:06PM INF          https://docs.nats.io/jetstream from=nats node_id=5973743519734446439
2:06PM INF from=nats node_id=5973743519734446439
2:06PM INF ---------------- JETSTREAM ---------------- from=nats node_id=5973743519734446439
2:06PM INF   Max Memory:      12.00 GB from=nats node_id=5973743519734446439
2:06PM INF   Max Storage:     137.21 GB from=nats node_id=5973743519734446439
2:06PM INF   Store Directory: "/var/folders/pj/n3sth0z55md7lydld97r8mmh0000gn/T/nats/marmot-node-5973743519734446439/jetstream" from=nats node_id=5973743519734446439
2:06PM INF ------------------------------------------- from=nats node_id=5973743519734446439
2:06PM INF Starting JetStream cluster from=nats node_id=5973743519734446439
2:06PM INF Creating JetStream metadata controller from=nats node_id=5973743519734446439
2:06PM INF JetStream cluster recovering state from=nats node_id=5973743519734446439
2:06PM INF Listening for client connections on 0.0.0.0:56401 from=nats node_id=5973743519734446439
2:06PM INF Server is ready from=nats node_id=5973743519734446439
2:06PM INF Cluster name is e-marmot from=nats node_id=5973743519734446439
2:06PM INF Listening for route connections on localhost:4221 from=nats node_id=5973743519734446439
2:06PM WRN JetStream has not established contact with a meta leader from=nats node_id=5973743519734446439
2:06PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4222: connect: connection refused from=nats node_id=5973743519734446439
2:06PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4222: connect: connection refused from=nats node_id=5973743519734446439
2:06PM WRN Waiting for routing to be established... from=nats node_id=5973743519734446439
2:06PM INF Starting nats-server from=nats node_id=5973743519734446439
2:06PM INF   Version:  2.10.4 from=nats node_id=5973743519734446439
2:06PM INF   Git:      [not set] from=nats node_id=5973743519734446439
2:06PM INF   Cluster:  e-marmot from=nats node_id=5973743519734446439
2:06PM INF   Name:     marmot-node-5973743519734446439 from=nats node_id=5973743519734446439
2:06PM INF   Node:     16AHgXE3 from=nats node_id=5973743519734446439
2:06PM INF   ID:       NCAAUCM4WY7EYU2E2F3YQGOKESPRDE2D65663G3PAQEOYKUG2SG6TVLU from=nats node_id=5973743519734446439
2:06PM INF Starting JetStream from=nats node_id=5973743519734446439
2:06PM INF     _ ___ _____ ___ _____ ___ ___   _   __  __ from=nats node_id=5973743519734446439
2:06PM INF  _ | | __|_   _/ __|_   _| _ \ __| /_\ |  \/  | from=nats node_id=5973743519734446439
2:06PM INF | || | _|  | | \__ \ | | |   / _| / _ \| |\/| | from=nats node_id=5973743519734446439
2:06PM INF  \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_|  |_| from=nats node_id=5973743519734446439
2:06PM INF from=nats node_id=5973743519734446439
2:06PM INF          https://docs.nats.io/jetstream from=nats node_id=5973743519734446439
2:06PM INF from=nats node_id=5973743519734446439
2:06PM INF ---------------- JETSTREAM ---------------- from=nats node_id=5973743519734446439
2:06PM INF   Max Memory:      12.00 GB from=nats node_id=5973743519734446439
2:06PM INF   Max Storage:     137.21 GB from=nats node_id=5973743519734446439
2:06PM INF   Store Directory: "/var/folders/pj/n3sth0z55md7lydld97r8mmh0000gn/T/nats/marmot-node-5973743519734446439/jetstream" from=nats node_id=5973743519734446439
2:06PM INF ------------------------------------------- from=nats node_id=5973743519734446439
2:06PM INF Starting JetStream cluster from=nats node_id=5973743519734446439
2:06PM INF Creating JetStream metadata controller from=nats node_id=5973743519734446439
2:06PM INF JetStream cluster recovering state from=nats node_id=5973743519734446439
2:06PM INF Listening for client connections on 0.0.0.0:56406 from=nats node_id=5973743519734446439
2:06PM INF Server is ready from=nats node_id=5973743519734446439
2:06PM INF Cluster name is e-marmot from=nats node_id=5973743519734446439
2:06PM INF Listening for route connections on localhost:4222 from=nats node_id=5973743519734446439
2:06PM WRN JetStream has not established contact with a meta leader from=nats node_id=5973743519734446439
2:06PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4221: connect: connection refused from=nats node_id=5973743519734446439
2:06PM INF 127.0.0.1:4221 - rid:7 - Route connection created from=nats node_id=5973743519734446439
2:06PM INF 127.0.0.1:56407 - rid:8 - Route connection created from=nats node_id=5973743519734446439
2:06PM ERR 127.0.0.1:56407 - rid:8 - Remote server has a duplicate name: "marmot-node-5973743519734446439" from=nats node_id=5973743519734446439
2:06PM INF 127.0.0.1:56407 - rid:8 - Router connection closed: Duplicate Server Name from=nats node_id=5973743519734446439
2:06PM INF 127.0.0.1:4221 - rid:7 - Router connection closed: Client Closed from=nats node_id=5973743519734446439
2:06PM ERR Error trying to connect to route (attempt 1): dial tcp [::1]:4221: connect: connection refused from=nats node_id=5973743519734446439
2:06PM WRN Waiting for routing to be established... from=nats node_id=5973743519734446439
2:06PM ERR NATS client disconnected node_id=5973743519734446439
2:06PM ERR NATS client exiting node_id=5973743519734446439
^C./run-cluster.sh: line 32: kill: `': not a pid or valid job spec

Example FAILING to CONNECT !

Hi there, I love the project you have going on here. I am facing a problem when I try to run the example. Here is a screenshot of what's going on.

Screenshot from 2023-08-31 15-01-31

Introduce testing suite

The following are required to be done:

  • Integration tests on CDC watching and publishing to NATS
  • Integration tests of reapplying change logs from NATS
  • Snapshot save/restore testing
  • SQLite Tcl level testing suite integration
