gcash / bchd

An alternative full node bitcoin cash implementation written in Go (golang)

License: ISC License

Go 91.73% Shell 0.04% Dockerfile 0.01% Makefile 0.04% JavaScript 5.40% Python 0.45% HTML 0.05% TypeScript 2.27%

bchd's People

Contributors

0xmichalis, aakselrod, acidsploit, cfromknecht, cpacia, dajohi, davecgh, dependabot[bot], drahn, ekliptor, emergent-reasons, flammit, gubatron, halseth, jcramer, jcvernaleo, jimmysong, jongillham, jrick, martelletto, owainga, roasbeef, stevenroose, swdee, tsenart, tuxcanfly, tyler-smith, wallclockbuilder, wpaulino, zquestz


bchd's Issues

Make stall ticker detect very slow syncPeer

Based on the conversation here, it is possible to have a syncPeer that stays connected but is very slow. It's probably hard to know which side is causing the slowness, but if we can detect the problem and drop the current syncPeer when transfer is very slow, it should help on average.
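One way the stall ticker could detect this is by tracking bytes received from the syncPeer per tick and evicting the peer after several consecutive slow intervals. The sketch below is purely illustrative: the type, field names, and thresholds are hypothetical, not bchd's actual peer code.

```go
package main

import "fmt"

// syncPeerMonitor is a hypothetical sketch: track bytes received from the
// current syncPeer per stall-ticker interval and drop the peer when the
// rate stays below a floor for several consecutive ticks.
type syncPeerMonitor struct {
	minBytesPerTick uint64 // below this the interval counts as "slow"
	slowTicks       int    // consecutive slow intervals observed
	maxSlowTicks    int    // how many slow intervals we tolerate
}

// onStallTick is called from the stall ticker with the bytes received since
// the previous tick. It returns true when the syncPeer should be dropped so
// a faster peer can be selected.
func (m *syncPeerMonitor) onStallTick(bytesSinceLastTick uint64) bool {
	if bytesSinceLastTick < m.minBytesPerTick {
		m.slowTicks++
	} else {
		m.slowTicks = 0
	}
	return m.slowTicks >= m.maxSlowTicks
}

func main() {
	m := &syncPeerMonitor{minBytesPerTick: 64 * 1024, maxSlowTicks: 3}
	for i, rx := range []uint64{10_000, 5_000, 1_000} {
		if m.onStallTick(rx) {
			fmt.Printf("tick %d: dropping slow syncPeer\n", i)
		}
	}
}
```

Because the counter resets on any fast tick, a peer that is merely bursty isn't dropped, only one that is persistently slow.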

can't sync with BU 1.5.1

Can't read message from 127.0.0.1:8333 (outbound): ReadMessage: unhandled command [xversion]

BU 1.5.1 removed xversion command?

Implement pruned mode

Bchd lacks pruning functionality, which is a major drawback for potential users. We'll need to prioritize this as soon as possible.

Extremely long sync when setting addrindex=1 and txindex=1

I have been waiting for literally weeks now for one of our btcd nodes to sync the blockchain with the addrindex=1 and txindex=1 settings enabled in the conf file.

This node is currently syncing 1 block every 5s, on an AWS r5.xlarge instance (2 vCPU | 16 GB RAM | 10,000 Mbps down | 3,500 Mbps up | 600 GB SSD).

Here is the most recent log tail:

...
2018-09-28 16:29:39.869 [INF] SYNC: Processed 2 blocks in the last 10.96s (4484 transactions, height 498477, 2017-12-09 23:28:04 +0000 UTC)
2018-09-28 16:29:54.888 [INF] SYNC: Processed 3 blocks in the last 15.01s (6659 transactions, height 498480, 2017-12-09 23:52:10 +0000 UTC)
2018-09-28 16:30:09.940 [INF] SYNC: Processed 3 blocks in the last 15.05s (6247 transactions, height 498483, 2017-12-10 00:17:36 +0000 UTC)
2018-09-28 16:30:21.293 [INF] SYNC: Processed 2 blocks in the last 11.35s (4155 transactions, height 498485, 2017-12-10 00:32:03 +0000 UTC)
2018-09-28 16:30:34.038 [INF] SYNC: Processed 3 blocks in the last 12.74s (7728 transactions, height 498488, 2017-12-10 00:57:09 +0000 UTC)
2018-09-28 16:30:44.168 [INF] SYNC: Processed 2 blocks in the last 10.12s (4506 transactions, height 498490, 2017-12-10 01:01:01 +0000 UTC)
2018-09-28 16:30:57.846 [INF] SYNC: Processed 3 blocks in the last 13.67s (6832 transactions, height 498493, 2017-12-10 01:41:58 +0000 UTC)
2018-09-28 16:31:12.427 [INF] SYNC: Processed 3 blocks in the last 14.58s (5194 transactions, height 498496, 2017-12-10 01:48:33 +0000 UTC)
2018-09-28 16:31:23.520 [INF] SYNC: Processed 2 blocks in the last 11.09s (3946 transactions, height 498498, 2017-12-10 01:52:45 +0000 UTC)
2018-09-28 16:31:38.558 [INF] SYNC: Processed 3 blocks in the last 15.03s (6250 transactions, height 498501, 2017-12-10 02:19:25 +0000 UTC)
2018-09-28 16:31:51.340 [INF] SYNC: Processed 2 blocks in the last 12.78s (3103 transactions, height 498503, 2017-12-10 02:21:42 +0000 UTC)
2018-09-28 16:32:01.644 [INF] SYNC: Processed 2 blocks in the last 10.3s (4246 transactions, height 498505, 2017-12-10 02:38:57 +0000 UTC) 

Here is the top command result:
ubuntu 20 0 2006320 655232 9668 S 359.7 2.0 1083:46 btcd

CPU is intensively used (359%).

We still have to wait ~3 days to fully sync the blockchain, and that's a lot of money spent.
Note: we witnessed performance degradation over time and now restart the btcd service every 6h to work around the issue.

I wonder why this block validation process is so slow compared to syncing blocks without the addrindex and txindex settings.

Port codebase to Bitcoin Cash

Porting this codebase to BCH is of moderate difficulty. The goal is to do it the right way with full rebranding and gutting segwit from the app so that we have a clean break from btcd and a solid foundation to start building.

This is an Epic. Make sure you have the ZenHub browser extension installed to view the board.

Addblock fails assert when loading from bootstrap

My pre-BCH bootstrap fails to load when trying to seed using addblock:

$ addblock -i /mnt/bootstrap_0-478557.dat
2018-11-07 05:21:23.682 [INF] MAIN: Loading block database from '/home/bitcoincash/.bchd/data/mainnet/blocks_ffldb'
2018-11-07 05:21:23.686 [INF] MAIN: Block database loaded
2018-11-07 05:21:23.687 [ERR] MAIN: Failed create block importer: assertion failed: blockchain.New excessive block size set lower than LegacyBlockSize

Handle max peers from a single IP

This comment can be found in server.go

// TODO: Check for max peers from a single IP.

Basically we don't want a single IP to be able to eat up all available connections.
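A minimal sketch of what resolving that TODO could look like: before accepting a connection, count existing peers sharing the remote IP and refuse once a per-IP cap is reached. The cap value and function names are assumptions for illustration, not bchd's actual server code.

```go
package main

import (
	"fmt"
	"net"
)

// Hypothetical cap on simultaneous connections from one IP address.
const maxPeersPerIP = 3

// hostOnly strips the port from a host:port address string.
func hostOnly(addr string) string {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return addr
	}
	return host
}

// allowConnection reports whether a peer at remoteAddr may connect, given
// the addresses of currently connected peers.
func allowConnection(remoteAddr string, connected []string) bool {
	ip := hostOnly(remoteAddr)
	n := 0
	for _, a := range connected {
		if hostOnly(a) == ip {
			n++
		}
	}
	return n < maxPeersPerIP
}

func main() {
	peers := []string{"1.2.3.4:8333", "1.2.3.4:8334", "1.2.3.4:8335"}
	fmt.Println(allowConnection("1.2.3.4:9000", peers)) // false: cap reached
	fmt.Println(allowConnection("5.6.7.8:8333", peers)) // true
}
```

A real implementation would also need to consider whitelisted addresses (e.g. localhost for testing) before enforcing the cap.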

Investigate libsecp256k1

The current secp256k1 implementation is slightly slower than the implementation in https://github.com/piotrnar/gocoin and we might want to consider swapping it out.

/tmp/___gobench_benchmark_test_go -test.v -test.bench "^BenchmarkVerify|BenchmarkVerify2$" -test.run ^$
goos: linux
goarch: amd64
pkg: verify
5000        233638 ns/op
10000        194663 ns/op
PASS

Also we should test out the c bindings for libsecp256k1 as I presume that would be the fastest option. It would also give us ECMH capability which we may eventually need as well.
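For comparing implementations, the shape of such a benchmark can be sketched with the stdlib testing package. Go's stdlib has no secp256k1, so crypto/ecdsa over P-256 is used below purely as a stand-in; in bchd one would benchmark the actual bchec signature verification against the candidate libraries.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"testing"
)

// benchVerify measures signature verification throughput. P-256 is a
// stand-in curve here; swap in the secp256k1 implementation under test.
func benchVerify(b *testing.B) {
	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	hash := sha256.Sum256([]byte("benchmark message"))
	sig, _ := ecdsa.SignASN1(rand.Reader, priv, hash[:])
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if !ecdsa.VerifyASN1(&priv.PublicKey, hash[:], sig) {
			b.Fatal("verify failed")
		}
	}
}

func main() {
	// testing.Benchmark lets us run a benchmark outside `go test`.
	res := testing.Benchmark(benchVerify)
	fmt.Println("ns/op:", res.NsPerOp())
}
```

Running the same harness against the pure-Go implementation, gocoin's, and the cgo libsecp256k1 bindings would give directly comparable ns/op figures like the ones pasted above.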

Save default config file regardless of binary location

If you run the binary from any other location than where the source code is located you get this error:

Error creating a default config file: open /home/chris/sample-bchd.conf: no such file or directory

The function that creates it is here: https://github.com/gcash/bchd/blob/master/config.go#L1075

And it has this comment:

// We assume sample config file path is same as binary

We should probably make it save the default config regardless of the binary location. Especially considering that the function creates and saves a default rpc username and pw.

I think this will probably require using go-bindata to store the sample-bchd.conf as a binary file.

Rebrand to BCH

We need to go package by package and change all references to btc to bch and all references to bitcoin to bitcoin cash.

race condition with blockchain.New() writing to wire package variables

I hesitate to file this. Maybe it is good to be searchable and it also might show a class of race conditions involving package level variables.

The race detector (go test -race ./...) found a race condition:

  • blockchain.New() sets some variables in the wire package.
  • The detector specifically caught wire.MaxMessagePayload being read and written by two different tests.

That wouldn't matter if, for example, blockchain.New() were only called once, but I found it used in:

  • cmd/addblock/import.go
  • cmd/findcheckpoint/findcheckpoint.go
  • and of course server.go

So there is a remote chance of mangled data using those commands. There also might be a class of problems with package variables.
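One conventional fix for this class of race is to stop exposing the bare package-level variable and guard it behind accessors. The sketch below uses stand-in names (the real variable is wire.MaxMessagePayload, written by blockchain.New as described above); whether bchd would want a mutex, atomics, or a once-only initializer is a design choice.

```go
package main

import (
	"fmt"
	"sync"
)

// Instead of a bare package-level variable that blockchain.New writes
// directly (as with wire.MaxMessagePayload), guard it behind accessors.
// The variable and functions here are illustrative stand-ins.
var (
	mu                sync.RWMutex
	maxMessagePayload uint32 = 32 * 1024 * 1024
)

// SetMaxMessagePayload safely updates the limit.
func SetMaxMessagePayload(v uint32) {
	mu.Lock()
	defer mu.Unlock()
	maxMessagePayload = v
}

// MaxMessagePayload safely reads the limit.
func MaxMessagePayload() uint32 {
	mu.RLock()
	defer mu.RUnlock()
	return maxMessagePayload
}

func main() {
	// Safe to call concurrently, e.g. from multiple tests running in
	// parallel, which is the case the race detector caught.
	var wg sync.WaitGroup
	for i := 1; i <= 4; i++ {
		wg.Add(1)
		go func(n uint32) {
			defer wg.Done()
			SetMaxMessagePayload(n * 1024 * 1024)
			_ = MaxMessagePayload()
		}(uint32(i))
	}
	wg.Wait()
	fmt.Println("done; run with `go test -race` to confirm no race")
}
```

The accessor approach also makes the write sites easy to grep for, which helps audit the wider class of package-variable problems mentioned here.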

Progress logging when fully synced

The logging looks like this:

SYNC: Processed 1 block in the last 17m36.6s (304 transactions, height 555567 of 555554, progress 100.00%, 2018-11-06 23:29:18 -0500 EST, ~2 MiB cache)

Maybe it should stop logging progress when fully synced. @swdee what do you think about this?

Convert all addresses in tests from legacy to cashaddress format

Many tests have address strings in them that are decoded for the purpose of the test. The tests may still pass with the legacy address but to make sure the codebase is as true to BCH as possible they should all be converted to cashaddresses.

This needs to be done after the cashaddress is implemented in bchutil.

Set up responsible disclosure process

One of the issues identified by both Cory Fields and Awemany was that it was difficult to find a PGP key and email address for submitting critical bugs.

We should come up with a way to make sure it's easy for people to figure out how to send us confidential emails.

Selfish mining mitigation

When confronted with two chains at the same height, the block we should build on top of should be the one with the most accurate timestamp. This helps slightly mitigate selfish mining.

ConnManager will make multiple outgoing connections to the same peer

It is rather unlikely that this will affect the full node, as outgoing addresses are chosen at random and there are a lot to choose from.

But given the low number of bchd nodes on the network to connect to, the neutrino wallet (which uses this code) will end up making multiple connections to the same peer.

Hook up tests to travis

Travis badge should go on the main README and possibly in the README in each sub package.

Update README badges in all packages

There is a CI badge in the README in each package that is currently pointing to btcsuite. Should remove the badge and/or replace with a travis badge for this repo.

Implement fast sync mode

Fast sync has always been possible but nobody has bothered to implement it. There are a number of different ways to do so and it seems people are holding out for an "ideal" solution.

While #17 will likely improve the initial block download speed, we should be able to do better.

The easiest way to implement fast sync is to just extend the checkpoint object to include the hash of the UTXO set and a data source.

Then we can just put the UTXO set at that checkpoint up on AWS or IPFS and have the node download the UTXO set at start up, parse it, validate the hash against the checkpoint and save to the database.

People will complain about the security model of this but in reality it's not much different than syncing from genesis. In both cases your options are:

Syncing from genesis: trust the developers hardcoded the correct genesis block or compile the software from source and verify the genesis block against a known good source.

Fast sync from checkpoint: trust the developers hardcoded the correct checkpoint and UTXO hash or compile the software from source and verify the checkpoint and UTXO hash against a known good source.

Not that much different.

This is far from the ideal way of doing it, but IMO it's better to have something than nothing. Especially with IBD taking longer.

The first hurdle here is to figure out how to hash the UTXO set. Ideally we would use the same ECMH algorithm that is being developed by Tomas van der Wansem for ABC, but we'll need to port that over, which isn't that easy. Or maybe we could use cgo.

If we must we could hash it some easier way for now and change the algorithm later. Since nothing is committed we can change the algorithm by just making a new release.

This also requires a utility cmd to calculate the hash of the UTXO at a given block so that we can build the checkpoint and others can verify it.

Sync stopped at 382452 with error: block contains too many signature operations - got 20001, max 20000

I am running a bchd node with txindex=1 and addrindex=1 and was syncing it for the past few days, it stopped at block height 382452:

2018-10-07 18:24:30.513 [INF] SYNC: Rejected block 00000000000000000a3acbef6eaa006b653bebccd2cff7228e499832739aea07 from 54.90.148.132:8333 (outbound): block contains too many signature operations - got 20001, max 20000
2018-10-07 18:26:26.526 [INF] SYNC: Rejected block 0000000000000000005e4051f988f84efc2334f033bd3d2071c7af336fa4f65e from 54.90.148.132:8333 (outbound): previous block 00000000000000000a3acbef6eaa006b653bebccd2cff7228e499832739aea07 is known to be invalid
2018-10-07 18:27:09.402 [INF] CHAN: Adding orphan block 0000000000000000082ac7db995ea28840848fa97658dc3c23dccec810bcda27 with parent 0000000000000000005e4051f988f84efc2334f033bd3d2071c7af336fa4f65e
2018-10-07 18:27:09.752 [INF] CHAN: Adding orphan block 00000000000000000e4b183a44e6fc1aceb142ce8eda3cda71e9b36b5c16a09a with parent 0000000000000000082ac7db995ea28840848fa97658dc3c23dccec810bcda27

I will go ahead and get the latest version of bchd, but is this a non-issue @cpacia ?

Add all active script flags where appropriate

At least two places introduced in PR #59:

var scriptFlags txscript.ScriptFlags

var scriptFlags txscript.ScriptFlags

This works as expected currently because none of the downstream users branch based on any other flag but this might not always be true. We should make these the full set of active flags.

Part of this is probably abstracting the instantiation of these flag sets based on the current environment.
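The abstraction suggested here can be sketched as a single function that derives the full active flag set from the chain state. The flag constants and the activation check below are illustrative stand-ins, not txscript's actual ScriptFlags values.

```go
package main

import "fmt"

// ScriptFlags mirrors the shape of txscript.ScriptFlags: a bitfield of
// enabled script verification features. Values here are stand-ins.
type ScriptFlags uint32

const (
	ScriptBip16 ScriptFlags = 1 << iota
	ScriptVerifyDERSignatures
	ScriptVerifyStrictEncoding
	ScriptVerifyMagneticAnomaly // hypothetical flag replacing the PR #51 boolean
)

// activeScriptFlags returns every flag that applies at the given median
// time past, so callers no longer branch on individual booleans like
// "is magnetic anomaly active?" themselves.
func activeScriptFlags(medianTime, magneticAnomalyActivation int64) ScriptFlags {
	flags := ScriptBip16 | ScriptVerifyDERSignatures | ScriptVerifyStrictEncoding
	if medianTime >= magneticAnomalyActivation {
		flags |= ScriptVerifyMagneticAnomaly
	}
	return flags
}

func main() {
	const activation = 1542300000 // illustrative activation timestamp
	flags := activeScriptFlags(activation+1, activation)
	fmt.Println(flags&ScriptVerifyMagneticAnomaly != 0) // true
}
```

Downstream code then tests `flags&ScriptVerifyMagneticAnomaly != 0` instead of receiving a dedicated boolean parameter, which scales to future forks without signature churn.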

Make sure all tests still pass

This should be done after rebranding and also after implementing each hardforking change:

  • After rebranding
  • After segwit removal
  • After August 1st, 2017 fork
  • After November 13, 2017 fork
  • After May 15th, 2018 fork
  • After November 15th, 2018 fork

Chain download stalls with 'No sync peer candidates available'

I've seen this several times now. Can't imagine how it's getting into a state where it has eight peers connected and it thinks none of them are viable sync candidates.

At minimum if it gets in this state I think it should disconnect from all peers as that should trigger connections to fresh peers which will restart the sync.

But identifying why this is happening would be nice.

Create UTXO memory cache

Bchd does not have a memory cache, and as a result initial block download is dreadfully slow because it commits the UTXO set changes after each downloaded block.

There is an open PR in btcd to add a memory cache but it looks like it's a work in progress and not fully functional. btcsuite/btcd#1168

We'll have to build on that and get it working asap because the speed of initial block download at this point is so slow that it basically makes the implementation unusable.

Not convinced this is the only bottleneck to faster chain download but it will be a good start.

Switch to gRPC API

gRPC provides a much nicer interface than the JSON RPC and websocket APIs. There is a WIP PR in btcd to add gRPC support btcsuite/btcd#1075.

We could build on that and get it to where it needs to be to merge in here. Once that's in I would prefer to remove all JSON RPC API altogether and get rid of the btcjson package.

gRPC has a nice feature in that there is a plugin that lets you spawn a REST reverse proxy to the API for people who absolutely need a REST JSON interface.

In the course of doing so we should make sure we add all the RPCs needed to use bchd as a backend SPV wallet server. This would imply creating a getheaders endpoint which accepts a block locator just like the wire message does. Plus a streaming headers endpoint for wallets to subscribe to.

Finally we'll need to serve SPV proofs either from a separate API or with the transactions response.

4x32Mb block verification passed in 8 mins

That's not an issue but a performance report.

This morning four 32 MB blocks were produced by the BMG pool using an undetermined node implementation (SV flavored?): source

Good news is that bchd didn't crash (like some other nodes did) and that it processed the blocks in less than 8 minutes (we were resyncing a node).

Here is the logs of our bchd node (txindex=1 and addrindex=1):

2018-11-10 19:13:36.495 [INF] SYNC: Processed 1 block in the last 11m59.94s (166882 transactions, height **556045**, 2018-11-10 14:34:35 +0000 UTC)
2018-11-10 19:16:59.256 [INF] SYNC: Processed 1 block in the last 3m22.76s (166799 transactions, height **556046**, 2018-11-10 14:49:35 +0000 UTC)
2018-11-10 19:17:00.001 [INF] CHAN: Adding orphan block 000000000000000000817e1110df7dbbc54eb32c408dc95653ecec5756ffd95b with parent 0000000000000000001846c8a93446ec91ae2df546443037727bec72902ad705
2018-11-10 19:17:00.323 [WRN] PEER: Received reject message from peer 80.179.226.48:8333 (outbound), code: REJECT_NONSTANDARD, reason: dust
2018-11-10 19:17:35.839 [INF] SYNC: Processed 1 block in the last 36.58s (22892 transactions, height 556047, 2018-11-10 14:57:54 +0000 UTC)
2018-11-10 19:21:31.859 [INF] SYNC: Processed 1 block in the last 3m56.02s (166337 transactions, height **556048**, 2018-11-10 15:07:34 +0000 UTC)

F] SYNC: Processed 1 block in the last 3m22.18s (166664 transactions, height **556049**, 2018-11-10 15:33:34 +0000 UTC)

Find a way to simplify use or instantiation of KeyDB in SignTxOutput

I tried to switch from btcd imports to bchd ones, unfortunately this code I had does not compile anymore:

lookupKey := func(a btcutil.Address) (*bchec.PrivateKey, bool, error) {
    return priv, true, nil
}
sigScript, err := txscript.SignTxOutput(&chaincfg.MainNetParams,
		redeemTx, 0, utxoSourceAmount, originTx.TxOut[0].PkScript, txscript.SigHashAll,
		txscript.KeyClosure(lookupKey), nil, nil)

I updated to the new way of doing this using closures, but this code does not compile either:

func ... { 
//...
sigScript, err := txscript.SignTxOutput(&chaincfg.MainNetParams,
		redeemTx, 0, utxoSourceAmount, originTx.TxOut[0].PkScript, txscript.SigHashAll,
		mkGetKey(map[string]addressToKey{
			sourceAddress.EncodeAddress(): {priv, true},
		}), nil, nil)
//...
}

func mkGetKey(keys map[string]addressToKey) txscript.KeyDB {
	if keys == nil {
		return txscript.KeyClosure(func(addr btcutil.Address) (*bchec.PrivateKey,
			bool, error) {
			return nil, false, nil
		})
	}
	return txscript.KeyClosure(func(addr btcutil.Address) (*bchec.PrivateKey,
		bool, error) {
		a2k, ok := keys[addr.EncodeAddress()]
		if !ok {
			return nil, false, nil
		}
		return a2k.key, a2k.compressed, nil
	})
}

I am surely missing something here, but I think it's a pretty complicated way to pass the private key for signing, even if it was originally created as a helper and for better security.

We could introduce an additional simpler method to do so.

Just my 2 cents. What do you think?
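A simpler helper for the common single-key case could wrap one private key in a KeyDB-style closure, so callers of SignTxOutput don't hand-build a map and closure. The sketch below is self-contained: the PrivateKey, Address, and KeyDB types are stand-ins mirroring the assumed shapes of bchec.PrivateKey, bchutil.Address, and txscript.KeyDB/KeyClosure, and the helper name is hypothetical.

```go
package main

import "fmt"

// Stand-in types mirroring the assumed shapes of bchec.PrivateKey,
// bchutil.Address, and txscript.KeyDB. In bchd the real types would be used.
type PrivateKey struct{ D string }

type Address interface{ EncodeAddress() string }

type KeyDB func(Address) (*PrivateKey, bool, error)

// SingleKeyDB returns a KeyDB that answers every address lookup with the
// same key, which is all the original btcd-era code above needed. In bchd
// this would wrap txscript.KeyClosure the same way.
func SingleKeyDB(priv *PrivateKey, compressed bool) KeyDB {
	return func(Address) (*PrivateKey, bool, error) {
		return priv, compressed, nil
	}
}

// addr is a trivial Address implementation for the demo.
type addr string

func (a addr) EncodeAddress() string { return string(a) }

func main() {
	db := SingleKeyDB(&PrivateKey{D: "secret"}, true)
	key, compressed, err := db(addr("bitcoincash:qq..."))
	fmt.Println(key.D, compressed, err)
}
```

With such a helper, the first (pre-migration) snippet in this issue would become a one-liner again: pass `txscript.SingleKeyDB(priv, true)` where the KeyDB argument goes.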

better remote node selection [split network attack]

Interesting read about splitting the network: https://ripe77.ripe.net/presentations/19-ripe_15_10.pdf

I didn't personally check bchd, but I assume remote node selection is just a simple random pick from the seed results (same as bitcoind), so there is a high chance it could select all 8 nodes within the same /24 range. IPv6 needs a different network mask for that, /48 or even /32, to be sure.

Maybe we should improve that process and avoid connecting to nodes that are close to each other in terms of IP space. Thoughts?
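The idea can be sketched as a netgroup key per candidate, keeping at most one peer per group. The masks follow the suggestion above (/24 for IPv4, /32 for IPv6); function names are illustrative, not bchd's.

```go
package main

import (
	"fmt"
	"net"
)

// netGroup maps an IP to its network-range key: /24 for IPv4 and /32 for
// IPv6, per the masks suggested above.
func netGroup(ipStr string) string {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return ipStr
	}
	if v4 := ip.To4(); v4 != nil {
		return v4.Mask(net.CIDRMask(24, 32)).String() + "/24"
	}
	return ip.Mask(net.CIDRMask(32, 128)).String() + "/32"
}

// pickDiverse keeps at most one candidate per network group, up to want
// peers, so the outgoing slots never all land in one range.
func pickDiverse(candidates []string, want int) []string {
	seen := make(map[string]bool)
	var out []string
	for _, c := range candidates {
		g := netGroup(c)
		if seen[g] {
			continue
		}
		seen[g] = true
		out = append(out, c)
		if len(out) == want {
			break
		}
	}
	return out
}

func main() {
	fmt.Println(pickDiverse([]string{"1.2.3.4", "1.2.3.9", "5.6.7.8"}, 8))
}
```

This is essentially the netgroup bucketing bitcoind's addrman applies; candidates should still be shuffled before selection so the diversity filter doesn't become deterministic.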

Bug: Change on handleGetNetworkHashPS() to return float64 breaks rpcclient code

In the following two commits 0ef2e33 and e536312 the handleGetNetworkHashPS() function was changed from the upstream source to return a float64 instead of the expected int64.

As explicitly stated in the code comments, this function returns an interface{} but all values returned should be int64. By returning a float64 the rpcclient code, which declares

var result int64

is now broken and terminates at run time because it fails to unmarshal the JSON:

json: cannot unmarshal number 181192779577.548 into Go value of type int64
exit status 1

Can someone provide some insight as to why this was changed to float64?

Inventory Broadcast Max

Inventory Broadcast Max was a limit in the Core codebase that limited how many txs were relayed to other peers and thus limited the size of blocks that could be created.

At a quick glance at the codebase I don't see the same thing in bchd; however, there is an effective equivalent in maxInvTrickleSize, which limits the inv packet to 1000 items. And since the default trickle interval is 10 seconds, this means the node won't send out more than approximately 30 MB worth of txs every 10 minutes.

As part of the zeroconf improvement we will likely be reducing or eliminating the trickle interval so txs are relayed immediately. I'm thinking about making the default trickle interval zero and if the interval is set to zero then the trickle timer is never started and inventory is broadcast immediately. This would leave the trickle in place as a config option. This would address the limit when the interval is zero or very low but maybe we should also look into increasing maxInvTrickleSize.
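The "interval zero means no timer" proposal can be sketched as below. The function and parameter names are illustrative, not bchd's peer code; the real implementation would also enforce maxInvTrickleSize on each batch.

```go
package main

import (
	"fmt"
	"time"
)

// startInvRelay relays inventory to a peer. If the configured trickle
// interval is zero, the timer is never started and each inv is sent
// immediately; otherwise invs are batched and flushed on each tick.
func startInvRelay(trickleInterval time.Duration, invs <-chan string, send func([]string)) {
	if trickleInterval == 0 {
		// Immediate relay: no timer, no batching.
		for inv := range invs {
			send([]string{inv})
		}
		return
	}
	ticker := time.NewTicker(trickleInterval)
	defer ticker.Stop()
	var batch []string
	for {
		select {
		case inv, ok := <-invs:
			if !ok {
				return
			}
			// The real code would cap this at maxInvTrickleSize.
			batch = append(batch, inv)
		case <-ticker.C:
			if len(batch) > 0 {
				send(batch)
				batch = nil
			}
		}
	}
}

func main() {
	invs := make(chan string, 2)
	invs <- "tx1"
	invs <- "tx2"
	close(invs)
	startInvRelay(0, invs, func(b []string) { fmt.Println("relay:", b) })
}
```

Keeping the interval as a config option (defaulting to zero) preserves the old batching behavior for operators who want bandwidth smoothing.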

Refactor block/tx sanity checks to use txscript.ScriptFlags for feature toggling

PR #51 introduced a boolean flag for whether or not the magnetic anomaly hardfork is active for the purposes of counting sigops.

A better solution is to use the existing ScriptFlags defined in the txscript subpackage. This will allow us to represent all potentially toggled features with a single object instead of passing around a ton of booleans in the future.

After looking at the ABC code, they do pretty much the same thing with their BlockchainValidationOptions type: https://github.com/Bitcoin-ABC/bitcoin-abc/blob/efa9749bac3e20b2a8fc056c01d75e3dbc4e341d/src/validation.cpp#L3025

Integration tests hide node errors and don't always clean up after themselves

I found that if the node has problems during the integration tests, the node errors are silenced, the harness errors out, and then the rpc test fails to clean up.

You can manually reproduce it in commits before this one:

  • Change anything that will cause the node to fail quickly.
    • For example in bchd.go, change the first err check to:
    if err == nil {
    	return os.ErrClosed
    }
    
  • Run an integration test, e.g.:
    • go test -tags rpctest ./integration/bip0009_test.go
  • Check /tmp and you should find:
    • /tmp/rpctest-data<x>/ (empty)
    • /tmp/rpctest-logs<x>/ (empty)
    • /tmp/bchd/rpctest/harness-<x>/ (rpc.cert, rpc.key, rpctest.pid)
  • Check test output and it will give no indication of what happened on the node or even that the node failed

UTXO cache takes more RAM than specified.

First, the UTXO memory cache is a great feature :)
Not sure if this is a bug, but when running with --utxocachemaxsize=2048, I am seeing above 8 GB of RAM used during initial synchronization.
This might cause unnecessary slowdowns from paging and/or crashes.

Allow syncing from pruned nodes

Pruned nodes signal themselves with the NodeNetworkLimited service flag. Since they are capable of serving at least a limited number of blocks, we should allow syncing from them if appropriate.

Things to do:

  • Allow nodes signaling NodeNetworkLimited to be selected as sync candidates if we are less than 30 days from the tip. 30 days is just a guess at an appropriate number; I suppose it could be a little longer.

  • We don't currently allow outgoing connections to nodes not signaling NodeNetwork. Ideally we would allow outgoing connections to NodeNetworkLimited nodes if we are less than 30 days from the tip, but at startup we don't really know how far we are from the tip. Maybe the best way to handle this is to allow at most half of our outgoing slots to be taken up by pruned nodes.

  • If we send a GetBlocks message to a pruned node and they don't have the block, they will respond with a NotFound message. We don't currently handle the NotFound and instead will eventually timeout. We need to make it handle the NotFound response and disconnect the peer.
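The third item above can be sketched as a small message handler: a pruned peer answering a block request with NotFound is useless as a sync peer, so it gets disconnected instead of waiting for the stall timeout. The message and peer types below are stand-ins for the wire and peer packages.

```go
package main

import "fmt"

// msgNotFound stands in for wire.MsgNotFound: the hashes the peer says it
// cannot serve.
type msgNotFound struct{ hashes []string }

// peer is a minimal stand-in for the peer package's connected-peer type.
type peer struct {
	addr         string
	disconnected bool
}

func (p *peer) Disconnect() { p.disconnected = true }

// onNotFound is the hypothetical handler: if the NotFound covers a block we
// actually requested during sync, drop the peer so a better sync peer can
// be chosen, rather than timing out.
func onNotFound(p *peer, msg *msgNotFound, requested map[string]bool) {
	for _, h := range msg.hashes {
		if requested[h] {
			p.Disconnect()
			return
		}
	}
}

func main() {
	p := &peer{addr: "1.2.3.4:8333"}
	requested := map[string]bool{"blockhash": true}
	onNotFound(p, &msgNotFound{hashes: []string{"blockhash"}}, requested)
	fmt.Println("disconnected:", p.disconnected)
}
```

Checking against the outstanding-request set matters: an unsolicited or unrelated NotFound shouldn't cost us an otherwise healthy peer.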

GetBlock RPC verbosity problem

Can't seem to get the verbosity flag to work right:

chris@chris-spectre:~$ bchctl getblock 000000000000000001197464c07a27f7903a694fff7367d0452a2fe8a26056fc true
-32602: parameter #2 'verbosity' must be type uint32 (got bool)
chris@chris-spectre:~$ bchctl getblock 000000000000000001197464c07a27f7903a694fff7367d0452a2fe8a26056fc 1
-32602: parameter #2 'verbosity' must be type uint32 (got bool)
chris@chris-spectre:~$ bchctl getblock 000000000000000001197464c07a27f7903a694fff7367d0452a2fe8a26056fc 2
getblock command: parameter #2 'verbose' must parse to a bool (code: ErrInvalidType)
Usage:
  getblock "hash" (verbose=true verbosetx=false)

Note the command helper says

getblock "hash" (verbose=true verbosetx=false)

but the main help menu reads

getblock "hash" (verbosity=1)

We should straighten this out.

Reindex chainstate missing?

I was not able to find a method equivalent to --reindex-chainstate (bitcoind) which would rebuild the local metadata from the blocks downloaded previously.

The only solution I found was to remove the metadata folder, which actually restarts the sync process from scratch and will take days to finish.

Did I miss something?

cryptolayer.net seed constantly failing

Every boot I see that seeder.criptolayer.net is failing:

2018-10-07 14:12:31.503 [INF] CMGR: DNS discovery failed on seed seeder.criptolayer.net: lookup seeder.criptolayer.net on 127.0.0.53:53: server misbehaving

We should probably remove this seed, and if we think we need more seeds we should consider a bchd.cash seed.

version is repeated and does not match spec

Trying to isolate configuration for #102, versioning is one of the things I want to move out of config.

I found these issues:

  1. version constants are maintained in both main and commands.

  2. The version String() behavior does not match the spec in the comments

Website

If any web developers/designers out there would like to contribute a static website to promote the software and host the binaries that would be much appreciated.
