
apache / incubator-resilientdb

Stars: 112 · Watchers: 15 · Forks: 195 · Size: 288.66 MB

Global-Scale Sustainable Blockchain Fabric

Home Page: https://resilientdb.com/

License: Apache License 2.0

C++ 83.68% Shell 3.26% Python 1.20% Starlark 10.97% QMake 0.10% Solidity 0.08% JavaScript 0.45% Dockerfile 0.18% CSS 0.08%
blockchain crypto distributed-database distributed-ledger key-value-database smart-contracts utxo solidity blockchain-platform

incubator-resilientdb's People

Contributors

ataridreams, calvinkirs, cjcchen, dakaikang, glenn-chen, gopuman, ic4y, jbonofre, juduarte00, kamaci, msadoghi, nobuginmycode, omahs, resilientdb, rohansogani, saipranav-kotamreddy, sajjadrahnama, xyhlinx, yuhaoran1214


incubator-resilientdb's Issues

Run in a geo-distributed way

Hi authors,

How do I run your code on more than one cluster? Currently, if I use ./resilientDB-docker --clients=1 --replicas=4, then I get 1 cluster with 4 replicas. But I want 2 clusters with 4 replicas each. I think some configuration must be needed, because ./resilientDB-docker --clients=1 --replicas=8 won't split the replicas into 2 clusters.

Appreciate your help!

memcpy error when trying to run script without docker

Tried copy-initialization instead of memcpy, but it does not work because one variable is a char and the other is a RemReqType. Should this memcpy error just be ignored?
transport/message.cpp: In member function ‘virtual void YCSBClientQueryMessage::copy_from_buf(char*)’:
./system/helper.h:206:29: error: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of non-trivially copyable type ‘class ycsb_request’; use copy-assignment or copy-initialization instead [-Werror=class-memaccess]
  memcpy(&v, &d[p], sizeof(v)); \
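
For context, a minimal self-contained sketch of why the compiler objects (the ycsb_request below is a hypothetical stand-in, not the real class): memcpy is only well-defined for trivially copyable types, so rather than ignoring the error, the generic buffer-copy helper can enforce that constraint at compile time and route non-trivial types to an explicit deserializer.

#include <cstdint>
#include <cstring>
#include <string>
#include <type_traits>

// Hypothetical stand-in: non-trivially copyable because it owns a std::string.
struct ycsb_request {
  std::string key;
  uint64_t value = 0;
};

// Guarded copy-from-buffer helper: -Werror=class-memaccess is warning that
// memcpy into a non-trivially copyable object is undefined behavior, so the
// error should not simply be silenced.
template <typename T>
void CopyFromBuf(T& v, const char* buf, uint64_t& pos) {
  static_assert(std::is_trivially_copyable_v<T>,
                "provide an explicit deserializer for this type");
  std::memcpy(&v, &buf[pos], sizeof(v));
  pos += sizeof(v);
}

For the char/RemReqType mismatch mentioned above, one option is to read into a char first and then convert with static_cast<RemReqType>(c), keeping the raw copy trivially typed.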

Client Memory

Clients don't release the responses received from the nodes, so memory is leaked.
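
A minimal sketch of the usual fix direction (hypothetical names, not the actual client code): give each received response a single owner so it is released as soon as the client finishes with it.

#include <memory>

struct Response {};  // stand-in for a deserialized reply from a replica

// Taking ownership via unique_ptr guarantees the response is destroyed when
// this function returns, even on an early return or exception.
void OnResponse(std::unique_ptr<Response> resp) {
  // ... process *resp ...
}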

New Feature: Support Light-weight Read-only Transaction Design

In the current version of ResilientDB, "get" transactions (i.e., read-only transactions) retrieve the values of some keys. To guarantee data consistency, these transactions are forced through the consensus layer to obtain correct results. Due to the nature of the consensus design, each transaction is written to the chain and to disk to support durability and recovery.

To improve read-only transactions, we can explore alternative designs that avoid participating in consensus yet still return consistent results.

Here are a set of important considerations:

  1. How do we identify read-only transactions?
  2. Should a fee be paid to process read-only transactions? If so, read-only transactions will require payment, which must itself pass through consensus.
  3. How do we ensure data consistency without engaging in consensus? Without consensus, each replica may return different data, so how do we verify that the data is consistent? Could we introduce read-only caching services (e.g., Coordination-Free Byzantine Replication with Minimal Communication Costs, ICDT'20)? Could we relax the need to provide the latest data and instead support snapshot reads, where each query from the same client returns the same version of the data, i.e., repeatable reads? (One possible consistency check is sketched below.)
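
On point 3, one classic consistency check, given purely as an assumption-laden sketch (the types below are hypothetical, not the ResilientDB API): with at most f Byzantine replicas, a client can accept a read result once f + 1 replicas return byte-identical values, because at least one of those replicas must be honest.

#include <map>
#include <optional>
#include <set>
#include <string>

struct ReadReply {
  int replica_id;
  std::string value;  // the replica's answer for the requested key
};

// Collects replies for one read; f is the assumed fault threshold.
class ReadQuorum {
 public:
  explicit ReadQuorum(int f) : f_(f) {}

  // Feed replies as they arrive; returns a value once f + 1 replicas agree.
  std::optional<std::string> OnReply(const ReadReply& reply) {
    if (!seen_.insert(reply.replica_id).second) return std::nullopt;  // dedupe
    if (++counts_[reply.value] >= f_ + 1) return reply.value;
    return std::nullopt;
  }

 private:
  int f_;
  std::set<int> seen_;                 // replicas already counted
  std::map<std::string, int> counts_;  // votes per distinct value
};

This yields a correct value but not necessarily the newest one, which is exactly where the snapshot-read relaxation mentioned above would come in.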

New feature: Support Dynamic Network Re-configuration

Neither the current release nor the version on the master branch supports reconfiguration: the number of replicas is fixed by the configuration at deployment time.

We need to support adding and removing replicas while ResilientDB is running in the cluster, after deployment.

ResilientDB Fails to run when clients > 1

@gupta-suyash @sajjadrahnama
./resilientDB-docker --clients=2 --replicas=4
The ./resilientDB-docker script fails because the ifconfig file is not created properly: multiple clients are written on the same line, separated by \n. There is a small bug in the docker-ifconfig.sh file.

I've fixed this small bug in my local copy.

Docker setup script very limiting

First of all, it seems the app is only compiled on one of the servers:

docker exec s1 mkdir -p obj
docker exec s1 make clean
docker exec s1 make

Instead of on all of them, which I do not understand.

Second, a lot of the code connects to the containers and executes commands, instead of putting a start script in the Dockerfile that runs after startup. This makes a multi-machine setup with Swarm very difficult.

In that sense, the repo is also missing the actual Dockerfile that would allow people to set this kind of thing up easily for themselves.

what are the public/private keys used for?

Shouldn't these files be generated uniquely by the build, as opposed to having hardcoded files checked in?
Couldn't it be a security issue if someone deploys your app and uses these keys or certs?

It is not ideal to include any binary files in a release of an Apache project; reviewers will find them and start asking why they are there. You cannot include compiled artifacts in a source release, so ASF contributors look for binary files to see if there is anything untoward.

./scripts/deploy/data/cert/node6.key.key.pri
./scripts/deploy/data/cert/node6.key.key.pub
./scripts/deploy/data/cert/node6.key.pri
./scripts/deploy/data/cert/node6.key.pub
./scripts/deploy/data/cert/node7.key.key.pri
./scripts/deploy/data/cert/node7.key.key.pub
./scripts/deploy/data/cert/node7.key.pri
./scripts/deploy/data/cert/node7.key.pub
./scripts/deploy/data/cert/node8.key.key.pri
./scripts/deploy/data/cert/node8.key.key.pub
./scripts/deploy/data/cert/node8.key.pri
./scripts/deploy/data/cert/node8.key.pub
./scripts/deploy/data/cert/node9.key.key.pri
./scripts/deploy/data/cert/node9.key.key.pub
./scripts/deploy/data/cert/node9.key.pri
./scripts/deploy/data/cert/node9.key.pub
./service/tools/data/cert/node6.key.key.pri
./service/tools/data/cert/node6.key.key.pub
./service/tools/data/cert/node6.key.pri
./service/tools/data/cert/node6.key.pub
./service/tools/data/cert/node7.key.key.pri
./service/tools/data/cert/node7.key.key.pub
./service/tools/data/cert/node7.key.pri
./service/tools/data/cert/node7.key.pub
./service/tools/data/cert/node8.key.key.pri
./service/tools/data/cert/node8.key.key.pub
./service/tools/data/cert/node8.key.pri
./service/tools/data/cert/node8.key.pub
./service/tools/data/cert/node9.key.key.pri
./service/tools/data/cert/node9.key.key.pub
./service/tools/data/cert/node9.key.pri
./service/tools/data/cert/node9.key.pub

Bug: View Change Stopping Liveness

When forcefully inducing a view change on the ResilientDB system, there are currently two issues that can occur:

  1. After the view change occurs, a replica does not resume the transaction it was performing when the view change occurred
  2. After the view change occurs, a replica is unable to receive/recognize transaction messages sent to it

From testing, type 1 is most likely an issue with when the view-change messages are received: they interrupt some process that was midway, and the program is unsure where to continue. The problem is fixed if start_kv_service.sh is rerun, so it is likely a runtime timing issue.

Type 2 is most likely an issue with the ports or memory. What I have noticed is that once a view change runs into type 2, every view change I try in that computer session afterwards also results in a type 2 error. Meanwhile, if the view change works fine the first time, all subsequent start_kv_service.sh runs in that session (when the code remains unchanged) avoid the type 2 issue.

There is also an issue where, sometimes after a view change completes, 30+ prepare and commit messages are collected for the next transaction, and transactions that were never sent are logged, resulting in higher executed counts and prepare-message counts than there should be.

large number of files with no Apache source header

This can easily lead to -1s on release votes.

Found with Apache Rat. https://creadur.apache.org/rat/

./.bazelrc
./.bazelversion
./.clang-format
./.gitignore
./.licenserc.yaml
./CHANGELOG.md
./CNAME
./CODE_OF_CONDUCT.md
./DISCLAIMER-WIP
./README.md
./WORKSPACE
./bazel.gpg
./repositories.bzl
./Docker/Dockerfile
./Docker/Dockerfile_mac
./api/README.md
./api/ip_address.config
./documents/doxygen/.gitignore
./documents/doxygen/Doxyfile
./documents/doxygen/DoxygenLayout.xml
./documents/doxygen/doxygen_html_style.css
./documents/doxygen/header
./documents/file/prometheus.yml
./monitoring/README.md
./monitoring/prometheus/prometheus.yml
./platform/consensus/ordering/geo_pbft/README.md
./platform/consensus/ordering/pbft/README.md
./platform/networkstrate/README.md
./platform/statistic/README.md
./platform/test/test_data/kv_config.config
./platform/test/test_data/server.config
./scripts/deploy/README.md
./scripts/deploy/config/key_example.conf
./scripts/deploy/config/kv_performance_server.conf
./scripts/deploy/config/kv_server.conf
./scripts/deploy/config/poe.config
./scripts/deploy/config/template.config
./scripts/deploy/data/cert/node6.key.key.pri
./scripts/deploy/data/cert/node6.key.key.pub
./scripts/deploy/data/cert/node6.key.pri
./scripts/deploy/data/cert/node6.key.pub
./scripts/deploy/data/cert/node7.key.key.pri
./scripts/deploy/data/cert/node7.key.key.pub
./scripts/deploy/data/cert/node7.key.pri
./scripts/deploy/data/cert/node7.key.pub
./scripts/deploy/data/cert/node8.key.key.pri
./scripts/deploy/data/cert/node8.key.key.pub
./scripts/deploy/data/cert/node8.key.pri
./scripts/deploy/data/cert/node8.key.pub
./scripts/deploy/data/cert/node9.key.key.pri
./scripts/deploy/data/cert/node9.key.key.pub
./scripts/deploy/data/cert/node9.key.pri
./scripts/deploy/data/cert/node9.key.pub
./service/tools/config/interface/service.config
./service/tools/config/interface/service0.config
./service/tools/config/server/server.config
./service/tools/config/server/utxo_config.config
./service/tools/contract/README.md
./service/tools/contract/api_tools/client_config.config
./service/tools/contract/api_tools/config/server_config.config
./service/tools/contract/api_tools/example_contract/token.sol
./service/tools/data/cert/node6.key.key.pri
./service/tools/data/cert/node6.key.key.pub
./service/tools/data/cert/node6.key.pri
./service/tools/data/cert/node6.key.pub
./service/tools/data/cert/node7.key.key.pri
./service/tools/data/cert/node7.key.key.pub
./service/tools/data/cert/node7.key.pri
./service/tools/data/cert/node7.key.pub
./service/tools/data/cert/node8.key.key.pri
./service/tools/data/cert/node8.key.key.pub
./service/tools/data/cert/node8.key.pri
./service/tools/data/cert/node8.key.pub
./service/tools/data/cert/node9.key.key.pri
./service/tools/data/cert/node9.key.key.pub
./service/tools/data/cert/node9.key.pri
./service/tools/data/cert/node9.key.pub
./service/tools/utxo/README.md
./service/tools/utxo/wallet_tool/cpp/client_config.config
./service/tools/utxo/wallet_tool/cpp/server_config0.config
./service/utxo/config/server_config.config
./service/utxo/config/utxo_config.config
./third_party/asio.BUILD
./third_party/civetweb.BUILD
./third_party/crow.BUILD
./third_party/date.BUILD
./third_party/eEVM.BUILD
./third_party/json.BUILD
./third_party/leveldb.BUILD
./third_party/prometheus.BUILD
./third_party/rapidjson.BUILD
./third_party/snappy.BUILD
./third_party/z.BUILD
./third_party/zlib.BUILD
./third_party/loc_script/action.yml
./third_party/loc_script/src/index.js
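
For reference, this is the standard ASF source header that Rat expects in the files above, shown as a C++ comment (each file type uses its own comment syntax):

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */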

How to test the performance of PoE and PBFT

I have tried to run ./performance/poe_performance.sh config/kv_performance_server.conf but cannot get the results. The output is shown below:

getting results
scp -i /home/ubuntu/.ssh/guigu_bft.pem [email protected]:/home/ubuntu/.log ./172.26.0.3_log
scp: /home/ubuntu/.log: No such file or directory
scp -i /home/ubuntu/.ssh/guigu_bft.pem [email protected]:/home/ubuntu/.log ./172.26.0.3_log
scp: /home/ubuntu/.log: No such file or directory
scp -i /home/ubuntu/.ssh/guigu_bft.pem [email protected]:/home/ubuntu/.log ./172.26.0.3_log
scp: /home/ubuntu/.log: No such file or directory
scp -i /home/ubuntu/.ssh/guigu_bft.pem [email protected]:/home/ubuntu/.log ./172.26.0.3_log
scp: /home/ubuntu/.log: No such file or directory
scp -i /home/ubuntu/.ssh/guigu_bft.pem [email protected]:/home/ubuntu/.log ./172.26.0.3_log
scp: /home/ubuntu/.log: No such file or directory
ls: cannot access 'result_*_log': No such file or directory
Traceback (most recent call last):
File "performance/calculate_result.py", line 70, in
cal_tps(tps)
File "performance/calculate_result.py", line 44, in cal_tps
print("average throughput:",sum(tps_sum)/len(tps_sum))
ZeroDivisionError: division by zero
save result to results.log
calculate results, number of nodes: 0
max throughput: 0

Bug: Core dump when a long-lived connection is killed

When a long-lived connection is killed, the Boost library does not return an error; instead it returns success with a random byte count.
The fix is to close the connection when an invalid package is read.
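
A minimal sketch of that defensive pattern, assuming Boost.Asio and a length-prefixed wire format (the function and constants are hypothetical, not the actual ResilientDB code):

#include <boost/asio.hpp>
#include <cstdint>
#include <vector>

// Read one length-prefixed packet; close the socket on anything suspicious
// instead of trusting whatever byte count a dying connection reports.
// (Endianness handling is omitted for brevity.)
bool ReadPacket(boost::asio::ip::tcp::socket& socket,
                std::vector<char>& payload, uint32_t max_len) {
  boost::system::error_code ec;
  uint32_t len = 0;
  std::size_t n =
      boost::asio::read(socket, boost::asio::buffer(&len, sizeof(len)), ec);
  if (ec || n != sizeof(len) || len == 0 || len > max_len) {
    socket.close();  // invalid or truncated header: drop the connection
    return false;
  }
  payload.resize(len);
  n = boost::asio::read(socket, boost::asio::buffer(payload), ec);
  if (ec || n != len) {
    socket.close();  // short or failed body read
    return false;
  }
  return true;
}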

./config/resdb_config_utils.h:43:5: error: 'std::optional' has not been declared

When running example/start_kv_server.sh after ./INSTALL.sh, the compiler complains:

In file included from config/resdb_config_utils.cpp:26:
./config/resdb_config_utils.h:43:5: error: 'std::optional' has not been declared
   43 |     std::optional<ReplicaInfo> self_info = std::nullopt,
      |     ^~~
./config/resdb_config_utils.h:43:18: error: expected ',' or '...' before '<' token
   43 |     std::optional<ReplicaInfo> self_info = std::nullopt,
      |                  ^
config/resdb_config_utils.cpp:133:35: error: 'std::optional' has not been declared
  133 |     const std::string& cert_file, std::optional<ReplicaInfo> self_info,
      |                                   ^~~
config/resdb_config_utils.cpp:133:48: error: expected ',' or '...' before '<' token
  133 |     const std::string& cert_file, std::optional<ReplicaInfo> self_info,
      |                                                ^
config/resdb_config_utils.cpp: In function 'std::unique_ptr<resdb::ResDBConfig> resdb::GenerateResDBConfig(const string&, const string&, const string&, int)':
config/resdb_config_utils.cpp:141:8: error: 'self_info' was not declared in this scope; did you mean 'cert_info'?
  141 |   if (!self_info.has_value()) {
      |        ^~~~~~~~~
      |        cert_info
config/resdb_config_utils.cpp:145:5: error: 'self_info' was not declared in this scope; did you mean 'cert_info'?
  145 |   (*self_info).set_id(cert_info.public_key().public_key_info().node_id());
      |     ^~~~~~~~~
      |     cert_info
config/resdb_config_utils.cpp:151:7: error: 'gen_func' was not declared in this scope
  151 |   if (gen_func.has_value()) {
      |       ^~~~~~~~
Target //kv_server:kv_server failed to build
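
This is the error GCC emits when std::optional is used without C++17 support, so the likely fix (an assumption about this particular setup, given the Bazel target //kv_server:kv_server above) is to force the language standard and make sure the header is included:

// std::optional lives in <optional> and requires C++17 or newer.
// With Bazel, the standard can be forced in .bazelrc:
//   build --cxxopt=-std=c++17
// or per invocation:
//   bazel build --cxxopt=-std=c++17 //kv_server:kv_server
#include <optional>

std::optional<int> maybe = std::nullopt;  // compiles only under C++17+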

Web Dashboard - UI

Add a web interface for interacting with ResilientDB on Google Cloud machines.

What's the DB type (in-memory or SQLite) used in the experiments of the VLDB publication?

Dear authors of ResilientDB,

Your VLDB publication, "ResilientDB: Global Scale Resilient Blockchain Fabric", was an excellent one, and I really appreciate that your team kindly open-sourced this great work.

Sorry about spamming the issues of this repo, but may I know which DB (in_memory_db or Sqlite) was used in the experiments of the paper?

Thanks!

LICENSE has incorrect third party information

The LICENSE should be the standard Apache License, but if you have 3rd-party source code included in your code base, you must also list the affected files and provide the licensing information about that 3rd-party source.

Examples:
https://github.com/apache/fury/blob/main/LICENSE
https://github.com/apache/spark/blob/master/LICENSE

Your LICENSE file lists 3rd-party libraries; those should not be in your LICENSE file.

Projects sometimes have binary artifacts that they release alongside their source releases. These binary artifacts should have LICENSE files, typically the standard Apache License plus a list of the 3rd-party libraries in the binary artifact with their license details. Many projects put this sort of info in a file called LICENSE-binary. ResilientDB does not appear to be releasing binary artifacts at the moment, so this is not strictly needed, but the info is useful for people who download your source release because it tells them what to expect when they run the install script. So creating a LICENSE-binary may still be a useful thing to do, and your currently incorrect LICENSE file may be a good starting point for it.

Example:
https://github.com/apache/spark/blob/master/LICENSE-binary

Cannot start service X: Ports are not available: unable to list exposed ports

Hi @sajjadrahnama, I was having issues with Docker. I am using Docker on WSL2 on Windows Build 21343.

I get errors when I invoke:
./resilientDB-docker -d

Output:

Number of Replicas:     4
Number of Clients:      1
Stopping previous containers...
Removing c4 ... done
Removing c2 ... done
Removing c1 ... done
Removing s3 ... done
Removing s2 ... done
Removing c3 ... done
Removing s5 ... done
Removing s1 ... done
Removing s4 ... done
Removing s6 ... done
Removing network resilientdb_default
Successfully stopped
Creating docker compose file ...
Docker compose file created --> docker-compose.yml
Starting the containers...
Creating network "resilientdb_default" with the default driver
Creating s1 ...
Creating s4 ... error
Creating s3 ...
Creating c1 ...
Creating s2 ...

Creating s2 ... error
Creating s1 ... error
Creating s3 ... error

ERROR: for s4  Cannot start service s4: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for c1  Cannot start service c1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s1  Cannot start service s1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s3  Cannot start service s3: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s4  Cannot start service s4: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s2  Cannot start service s2: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for c1  Cannot start service c1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s1  Cannot start service s1: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused

ERROR: for s3  Cannot start service s3: Ports are not available: unable to list exposed ports: Get "http://unix/forwards/list": dial unix /mnt/wsl/docker-desktop/shared-sockets/host-services/backend.sock: connect: connection refused
ERROR: Encountered errors while bringing up the project.
ifconfig file exists... Deleting File
Deleted
Server sequence --> IP
Put Client IP at the bottom
ifconfig.txt Created!

Checking Dependencies...
/mnt/d/programs/ExpoLab/ResilientDB
Dependencies has been installed

Creating config file...
Config file has been created

Compiling ResilientDB...
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
ResilientDB is compiled successfully

Running ResilientDB replicas...
rep 1
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
rep 2
Error response from daemon: Container b3f1a934d0d9c17373d099a2653513a5689a3cfa7dbe98bb17e33a436c186665 is not running
rep 3
Error response from daemon: Container f82fc2862216fcf5a8f53692034785b11e5f3cb709ce37c0aec0b94e5b91c870 is not running
Error response from daemon: Container 2cc0570c22d740cfb3a0576ee3c2e9b2b12efe6f36c22abe2ed4f0bb39f5598a is not running
rep 4
Error response from daemon: Container b3f1a934d0d9c17373d099a2653513a5689a3cfa7dbe98bb17e33a436c186665 is not running
Error response from daemon: Container 2cc0570c22d740cfb3a0576ee3c2e9b2b12efe6f36c22abe2ed4f0bb39f5598a is not running
Error response from daemon: Container ec780557852d406bab008d1df771d6b3a0ea2a4a896995bf3e6f8f6bf8f9e8ee is not running
Replicas started successfully

Running ResilientDB clients...
cl 1
Error response from daemon: Container ec780557852d406bab008d1df771d6b3a0ea2a4a896995bf3e6f8f6bf8f9e8ee is not running
Error response from daemon: Container 857462a1b32fb27589536499c7ed6838d6e11b231ba82d04ff8e5217be8b8861 is not running
Clients started successfully
Error response from daemon: Container 857462a1b32fb27589536499c7ed6838d6e11b231ba82d04ff8e5217be8b8861 is not running
scripts/result.sh: line 48: 0 + : syntax error: operand expected (error token is "+ ")
(standard_in) 2: syntax error
expr: division by zero
(standard_in) 1: syntax error
Throughputs:
0:
1:
2:
3:
scripts/result_colorized.sh: line 51: 0 + : syntax error: operand expected (error token is "+ ")
Latencies:
(standard_in) 2: syntax error
latency 4:

idle times:
Idleness of node: 0
Idleness of node: 1
Idleness of node: 2
Idleness of node: 3
Memory:
0: 0 MB
1: 0 MB
2: 0 MB
3: 0 MB
4: 0 MB

expr: division by zero
avg thp: 0:
(standard_in) 1: syntax error
avg lt : 1:
Code Ran successfully ---> res.out

I'm not sure whether this is an issue with Docker specifically or with WSL2 itself. I'm new to using Docker, so if you could point me to a good resource that would be great! Let me know if I'm missing any information I should add to my issue.

Thanks.

Best,
Alejandro

'NN:: Exception' when trying to run ResilientDB-Docker with a large number of clients/replicas

Dear,
I tried to run the resilientDB blockchain using Docker. It works fine with a small number of clients/servers. However, it gives me the error shown in the screenshot below when the number of clients/replicas is >= 10.
For your information, the following three commands give me the same error:
./resilientDB-docker --clients=10 --replicas=4
./resilientDB-docker --clients=1 --replicas=10
./resilientDB-docker --clients=10 --replicas=10

However, it works fine if I run it as follows:
./resilientDB-docker --clients=9 --replicas=4
./resilientDB-docker --clients=9 --replicas=9
./resilientDB-docker --clients=1 --replicas=4
[screenshot: Capture]
