kwil-db's Introduction

Kwil

The database for Web3.

Kwil is the node software for Kwil Networks. Built with PostgreSQL and CometBFT, Kwil enables byzantine fault tolerant networks to be built on top of relational databases.

Overview

To learn more about high-level Kwil concepts, refer to the Kwil documentation.

To test deploying and using a Kuneiform schema (Kwil's smart contract language) on the Kwil testnet, refer to the Kwil testnet tutorial.

For more information on kwil-db, check out the Kwil node documentation.

Quickstart

Build Instructions

Prerequisites

To build Kwil, you will need to install:

  1. Go 1.22 or 1.23

  2. (optional) Protocol Buffers, with the protoc executable binary on your PATH.

  3. (optional) Taskfile

  4. (optional) Protocol Buffers Go plugins and other command line tools. The tools task will install the required versions of the tools into your GOPATH, so be sure to include GOPATH/bin on your PATH:

    task tools

Only Go is required to build directly from the cmd/kwild folder or via go install, although developers may require the other tools.

To run Kwil, PostgreSQL is also required. See the documentation for more information.

Build

The build task will compile kwild, kwil-cli, and kwil-admin binaries. They will be generated in .build/:

task build

You may also build the individual applications manually:

cd cmd/kwild
go build

Or without even cloning the source repository:

go install github.com/kwilteam/kwil-db/cmd/kwild@v0.7.0

Just replace v0.7.0 with the desired version or latest.

Running kwild

Running kwild requires a running PostgreSQL host. Since the default configuration of most PostgreSQL packages requires changes for kwild, the easiest approach is to run our pre-configured Docker image:

docker run -p 5432:5432 -v kwil-pg-demo:/var/lib/postgresql/data \
    --shm-size 256m -e "POSTGRES_HOST_AUTH_METHOD=trust" \
    --name kwil-pg-demo kwildb/postgres:latest

The first time this is run, it will pull the kwildb/postgres image from Docker Hub and create a new persistent Docker volume named kwil-pg-demo. NOTE: This command requires no authentication with postgres, and should not be used in production.

task pg may be used to run the above command.

You can then start a single node network using the kwild binary built in the previous section:

# Use the full path to kwild if it is not on your PATH.
kwild --autogen

With the --autogen flag, the node automatically creates a new random network and validator key, and the node will begin producing blocks.

For more information on running nodes, and how to run a multi-node network, refer to the Kwil documentation.

Resetting node data

By default, kwild stores all data in ~/.kwild. To reset the data on a deployment, remove the data directory while the node is stopped:

rm -r ~/.kwild

Then delete the PostgreSQL database. If using the Docker image or service, delete the container and its volume:

docker container rm -f kwil-pg-demo
docker volume rm -f kwil-pg-demo

task pg:clean may be used to run the above commands.

If using a system install of postgres, recreate the database with psql:

psql -U postgres -h 127.0.0.1 -d postgres \
    -c "DROP DATABASE IF EXISTS kwild" \
    -c "CREATE DATABASE kwild OWNER kwild"

Unified kwild + postgres Quickstart Docker Service

For development purposes, the deployments/compose/kwil folder contains a Docker Compose service definition that starts both kwild and postgres, configured so that they will work together out-of-the-box with no additional configuration changes. Start it by running the following from the deployments/compose/kwil folder in the repository:

cd deployments/compose/kwil
docker compose up --build -d

With the -d option, the service(s) will be started as background processes. To stop them or view logs, use the docker container commands or the Docker Desktop dashboard.

On start, this service definition will create a testnode folder in the same location as the docker-compose.yml file, and a persistent Docker volume called kwil_pgkwil for the postgres database cluster files.

This also runs with the --autogen flag, creating a new randomly generated chain, and is not intended for production use. However, the service definition may be used as a basis for a customized deployment.

Extensions

Kwil offers an extension system that allows you to extend the functionality of your network (e.g. building network oracles, customizing authentication, running deterministic compute, etc.). To learn more about the types of extensions and how to build them, refer to the extensions directory README.

Contributing

We welcome contributions to kwil-db. To contribute, please read our contributing guidelines.

License

The kwil-db repository (i.e. everything outside of the core directory) is licensed under the Apache License, Version 2.0. See LICENSE for more details.

The kwil Go SDK (i.e. everything inside of the core directory) is licensed under the MIT License. See core/LICENSE.md for more details.

kwil-db's Issues

cosmos account

To have a blockchain account, these are the requirements:

  1. compatible with the Ethereum algorithm

This will be done on top of the cosmos-sdk auth module

support unconfirmed tx in TxQuery endpoint

Right now the TxQuery RPC uses rpc/client/local.Tx, which could return nil if the transaction is in the mempool.

A promising function we could use is rpc/client/local.UnconfirmedTxs if we get a nil transaction; this needs more research.
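A minimal sketch of that fallback, assuming a CometBFT rpc/client/local client; the pending-result shape and mempool page size are illustrative:

import (
	"bytes"
	"context"

	"github.com/cometbft/cometbft/rpc/client/local"
	ctypes "github.com/cometbft/cometbft/rpc/core/types"
)

// queryTx first checks committed txs, then falls back to the mempool.
func queryTx(ctx context.Context, c *local.Local, hash []byte) (*ctypes.ResultTx, error) {
	res, err := c.Tx(ctx, hash, false)
	if err == nil && res != nil {
		return res, nil // confirmed: height and execution result available
	}
	limit := 100 // illustrative page size
	unconf, uerr := c.UnconfirmedTxs(ctx, &limit)
	if uerr != nil {
		return nil, uerr
	}
	for _, tx := range unconf.Txs {
		if bytes.Equal(tx.Hash(), hash) {
			// Pending in the mempool: no height or result yet.
			return &ctypes.ResultTx{Hash: tx.Hash(), Tx: tx}, nil
		}
	}
	return nil, err // not found on chain or in mempool
}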

support readable data signing

Currently, when signing a message, we're signing a hash of the underlying data, which is not readable to the user.

We need to support structured data signing (like EIP-712), which will make the message being signed readable to the user; this involves making changes to signing and verification.
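For reference, a minimal sketch of producing and signing an EIP-712 digest with go-ethereum's apitypes package; the domain and message fields here are purely illustrative, not Kwil's actual schema:

import (
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/signer/core/apitypes"
)

func signReadable(privKeyHex string) ([]byte, error) {
	td := apitypes.TypedData{
		Types: apitypes.Types{
			"EIP712Domain": {
				{Name: "name", Type: "string"},
				{Name: "version", Type: "string"},
			},
			"ActionCall": { // hypothetical message type
				{Name: "action", Type: "string"},
				{Name: "dbid", Type: "string"},
			},
		},
		PrimaryType: "ActionCall",
		Domain:      apitypes.TypedDataDomain{Name: "kwil", Version: "1"},
		Message: apitypes.TypedDataMessage{
			"action": "get_balance", // wallets display these fields as-is
			"dbid":   "xf617af1...", // placeholder value
		},
	}
	digest, _, err := apitypes.TypedDataAndHash(td)
	if err != nil {
		return nil, err
	}
	key, err := crypto.HexToECDSA(privKeyHex)
	if err != nil {
		return nil, err
	}
	return crypto.Sign(digest, key) // 65-byte [R || S || V] signature
}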

CLI for Join request stats

We need CLI support for a bunch of things:

  • Outstanding join requests
  • Join request status for a given node
  • Network Approved nodes
  • Approval votes received

Crash recovery

Persist application state info and changeset info to disk, and support recovery using that state

New CLI against the kwild server

Need a new CLI for configuration. It should support something like below:

kwild start
kwild validator approve
kwild validator join
kwild validator leave
kwild snapshots??

[ENG-197] Kuneiform Parser Needs to Update Extension Struct

The Kuneiform Parser (at least within the main Kwil repo) does not use the updated extension structure for the schema:

New:

type Extension struct {
	Name   string
	Config []*ExtensionConfig
	Alias  string
}

type ExtensionConfig struct {
	Argument string
	Value    string
}

Old:

type Extension struct {
	Name   string            `json:"name"`
	Config map[string]string `json:"config"`
	Alias  string            `json:"alias"`
}

This change was necessary because RLP does not support maps.

ENG-197

log format adjust

We have two log formats right now:

  • from cometbft: I[2023-08-24|15:47:12.461] service start module=proxy msg="Starting multiAppConn service" impl=multiAppConn
  • from kwild: {"level":"info","ts":1692910032.539883,"caller":"sessions/session.go:481","msg":"closing atomic committer"}

Making these the same format would be nice
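One hedged way to unify them, assuming kwild keeps zap: wrap the zap logger in cometbft's log.Logger interface so both components emit the same structured JSON (the adapter name is illustrative):

import (
	cmtlog "github.com/cometbft/cometbft/libs/log"
	"go.uber.org/zap"
)

// cometZap adapts a zap logger to cometbft's log.Logger interface.
type cometZap struct{ z *zap.SugaredLogger }

func (l cometZap) Debug(msg string, kv ...interface{}) { l.z.Debugw(msg, kv...) }
func (l cometZap) Info(msg string, kv ...interface{})  { l.z.Infow(msg, kv...) }
func (l cometZap) Error(msg string, kv ...interface{}) { l.z.Errorw(msg, kv...) }
func (l cometZap) With(kv ...interface{}) cmtlog.Logger {
	return cometZap{z: l.z.With(kv...)}
}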

Node should maintain Approved Validator nodes list

Requirements:

  • A file to maintain Approved validator nodes list
  • CLI to add/remove a validator node to/from the list

Update the ValidatorUpdate RPC endpoint to check the approved validator list when deciding whether to add the validator.

Delete should not remove the validator from the list, in case the node wants to re-join as a validator; it can then do so without getting the approvals again (should there be a time limit?)

Consensus based Validator update approvals

If we need all the nodes in the network to approve a new validator, we would need some kind of consensus

Requirements:

  • Reactor to broadcast the messages to all the existing validators (or all peers) - async calls
  • Mechanism to process the responses (only from its validator set, can ignore the responses from other nodes)
  • If the required votes are received, issue an internal ValidatorUpdate Tx

Nodes must agree on network joiners

  1. Research tendermint to see if it can provide this consensus.
  2. If yes, implement into Kwil's consensus mechanism

Feel free to add additional tasks or context as you see necessary.

potential replay attack on `view mustsign` action

This is an issue only when data privacy is a concern.

Since a request for a view action is not a transaction, but a signature is required (the action restricts access by verifying the caller address), the request could be replayed.

Updated:
The definition of view action has changed; it now only indicates that the action is read-only.
view mustsign will imply a signature is required.

Updated 09/29/23:
Given the current structure of pkg/transactions.CallMessage, it's fairly easy to mount a replay attack, the reason being the lack of a field that would make a signed CallMessage invalid after it has been verified once.
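A minimal sketch of the usual fix, binding the signed call to a single-use or expiring value; these field names are illustrative, not pkg/transactions' actual shape:

// CallMessage with anti-replay fields (hypothetical).
type CallMessage struct {
	Action  string
	Payload []byte
	// Either of the following makes a captured message unusable later:
	Nonce     uint64 // strictly increasing per caller, tracked server-side
	ExpiresAt int64  // block height or timestamp after which the signature is rejected
	Signature []byte
}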

Kwil Configuration Layout

Listing out the required configuration for operation of the kwil nodes

./node
    /config
        kwil.toml                (kwil-specific configuration as specified below)
        config.toml
        genesis.toml
        node_key.json
        priv_validator_key.json
    /data                        (all CometBFT data)
        priv_validator_state.json

Locations for the data stores below are configurable:

    /sqlite    (datastore, account store, and validator store)
    /wal       (commit WAL)
    /state     (kvstore)

Kwil specific configuration:

[RPC config]

grpc_listen_addr:
rpc_listen_addr:

[Extensions]

endpoints:

[kwil db]

db_dir:
wal_dir:
state_dir:

[Accounts]

enable_gas_costs:
enable_nonces:

[Snapshots]

enabled:
recurring_height:
max_snapshots:
snapshot_dir:

[Bootstrapper]

snapshot_dir:

[Logging]

log_level:

------- Miscellaneous -------

[Arweave]

BundlrURL:

We're not sure yet what config we will have for TLS; once we have more info we can hook that in. Also, private keys are still a TODO.

Tools for creating network

Create a Kwil utils command tool to generate a testnet, single kwil node configs, validator config, node key config, etc.

To launch a testnet:

kwild testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2

kwild testnet --v 3 --o ./output --populate-persistent-peers --hostname 192.168.10.10 --hostname 192.168.10.20 --hostname 192.168.10.30

If the --disable-gas option is provided, the chainID will be prefixed with "kwil-chain-gcd"; otherwise the prefix will be "kwil-chain-gce".

enhancement: stand-alone kwild binary

As discussed in #247, kwild should be its own stand-alone binary. This should include:

  • Configuration precedence (low to high): default config, config file, env, flags.
  • An external config interface controlled by us. Right now, we include CometBFT's in our own; this should be changed to directly translate to CometBFT's. This is to solve for issues as seen here: #220 (comment)
  • It should provide a config.toml.sample in the root directory.

The binary should support the following flags:

  • Config flags, for any value within the config file
  • --quickstart, which will auto-generate a private key and genesis file if they do not exist
  • --root_dir, which specifies the root directory (AFAIK this would not exist within the config file, since this specifies where the config is).

ENG-198

Establish commit boundaries

All commits should occur at block boundaries to ensure the atomicity of transaction execution within the block.

Commits should be written to the DB only after the block is committed; until then, use a WAL to hold the intermediary state transitions.

public key address format

ed25519 is supported in #191, but we don't yet have an address format for it. It's hex(pubKey[:20]) right now.

This is also a concern because we use the Ethereum address as our default user wallet address, which is 0x + hex(20 bytes). We need a distinct address format to:

  • identify different address types
  • avoid collisions
  • support more key types in the future
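One possible direction, sketched minimally: a scheme-prefixed address so different key types cannot collide. The prefix convention and 20-byte truncation here are hypothetical, not Kwil's actual format:

import (
	"crypto/sha256"
	"encoding/hex"
)

// address derives a key-type-aware address (hypothetical format).
// e.g. address("ed25519", pk) => "ed25519:3f9a..." — the scheme prefix
// distinguishes key types and leaves room for new ones.
func address(scheme string, pubKey []byte) string {
	h := sha256.Sum256(pubKey)
	return scheme + ":" + hex.EncodeToString(h[:20])
}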

SQL state branching in memory

Due to tendermint's design, there are a couple of stages before a block is committed, so branched state is needed for these different stages:

  • checkTx
  • deliverTx
  • query

Basically, what needs to be done is to load the to-be-changed sqlite database into memory and make a branch in memory (we may need nested branches for the different stages).

make api structure consistent between grpc and http

We use grpc-gateway to generate an HTTP server from protobuf.

Multi-word fields will by default generate camelCase JSON tags; we want all such fields to use snake_case.

This mainly requires changing the proto files (alternatively, the gateway's JSON marshaler can be configured; see the sketch after this list). Known fields:

  • tx_hash
  • payload_type
  • gas_used
  • gas_wanted
  • tx_result
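A hedged sketch of the marshaler option, assuming grpc-gateway v2; this makes the gateway emit field names exactly as written in the .proto (i.e. snake_case) without touching the proto files:

import (
	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/protobuf/encoding/protojson"
)

func newGatewayMux() *runtime.ServeMux {
	return runtime.NewServeMux(
		runtime.WithMarshalerOption(runtime.MIMEWildcard, &runtime.JSONPb{
			MarshalOptions: protojson.MarshalOptions{
				UseProtoNames: true, // emit snake_case names from the .proto
			},
		}),
	)
}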

Statesync support

If the blockchain is truncated, we need some kind of snapshotting for new nodes to get up-to-date DB state.

  • Use the ABCI snapshot interfaces to snapshot kwildb along with the blockchain snapshots, and replay the snapshot during init when joining.

bug: Deploy/Drop Database are not idempotent

A core requirement of our two phase commit protocol is that everything written to the WAL must be idempotent. Currently, deploy/drop is not idempotent. We need to make changes to the engine to make these idempotent.

support execute `view` action

kwilteam/kuneiform#13

For a view action to work, the client needs to know what type the action is before calling it:

  • a view action won't change state, thus it's not a transaction
  • a separate RPC is needed for non-tx action execution
  • even though it's not a transaction, we need a signature for this RPC

RPC endpoint to monitor the progress of Validator approvals

Maybe we need an RPC endpoint to query the status of a validator update request:

RPC Endpoint: /ValidatorStatus?val="pubkey"

Response should/may contain:

  • # of positive votes received
  • # of negative votes received
  • minimum required votes
  • [list of validators that voted for the node]
  • [list of validators that voted against the node]
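A hypothetical response shape for this endpoint (field names illustrative):

type ValidatorStatusResponse struct {
	PositiveVotes    int      `json:"positive_votes"`
	NegativeVotes    int      `json:"negative_votes"`
	MinRequiredVotes int      `json:"min_required_votes"`
	ApprovedBy       []string `json:"approved_by"` // validator pubkeys that voted for
	RejectedBy       []string `json:"rejected_by"` // validator pubkeys that voted against
}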

configurable RPC service registration

For various reasons, we need to make it configurable which (database-related) RPCs are exposed.

For example, some Kwil chains don't want any user to deploy a new database, while some do. In this example, deploy_database cannot be exposed. Currently, deploy_database is handled by TxService.Broadcast; to support this level of configuration, we kind of need to put the handling of those message types into separate RPCs. If we do it this way, existing SDKs need to be updated accordingly.

[ENG-196] RLP Encoding Order Tags

Today, we found out that the order of elements in a struct does matter for RLP encoding. For example:

type Struct1 struct {
	Val1 uint64
	Val2 string
}

type Struct2 struct {
	Val2 string
	Val1 uint64
}

If I encode Struct1, it cannot be decoded into Struct2.

This is quite a problem for us, since we change things quite a bit. Serialization / deserialization across languages is also a tricky beast (particularly in JS), and errors aren't found until runtime / can't be tested for.

Therefore, I think we should implement something like:

type Struct1 struct {
	Val1 uint64
	Val2 string
}

type Struct2 struct {
	Val2 string `order:"1"`
	Val1 uint64 `order:"2"`
}

where the struct tags specify the serialization and deserialization order. This would allow us to not have to worry about how the order might change for any given payload.

I'd imagine that if a struct tag was not included, it would just order those after the numbered struct tags.
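A minimal sketch of how such tags could be resolved with reflection; the order tag is the hypothetical one proposed above:

import (
	"math"
	"reflect"
	"sort"
	"strconv"
)

// orderedFields returns struct field values sorted by their order tag;
// untagged fields sort after the numbered ones, keeping declaration order.
func orderedFields(v interface{}) []reflect.Value {
	rv := reflect.ValueOf(v)
	rt := rv.Type()
	type entry struct{ idx, order int }
	entries := make([]entry, 0, rt.NumField())
	for i := 0; i < rt.NumField(); i++ {
		o := math.MaxInt // untagged fields sort last
		if tag, ok := rt.Field(i).Tag.Lookup("order"); ok {
			if n, err := strconv.Atoi(tag); err == nil {
				o = n
			}
		}
		entries = append(entries, entry{i, o})
	}
	sort.SliceStable(entries, func(i, j int) bool { return entries[i].order < entries[j].order })
	out := make([]reflect.Value, len(entries))
	for i, e := range entries {
		out[i] = rv.Field(e.idx)
	}
	return out
}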

Not an immediate priority, but could be helpful in preventing bugs that would not come up in unit / integration tests.

ENG-196

proposal: kwild and utils separation, admin service, config decisions

This issue describes a possible way forward to accomplish the following:

  • simplify kwild config and commands
  • adding a node admin service and possibly a corresponding admin tool for node operators, not kwild users

The kwild binary as just a Kwil node

Narrow scope of kwild binary and config file to the Kwil server role

We can make kwild into a single-purpose executable with one job -- running the Kwil server/node.

In short, ./kwild server start => ./kwild, and extract the sub-commands under utils into a separate cmd/kwiladm or cmd/kwild-admin tool.

This would help simplify the meaning and scope of the config file (config.toml) which really just pertains to the node functionality of server start. In general, if there's a file like "kwild.conf", that applies to the operation of kwild, not just some sub-commands of it. In terms of simplicity and code architecture, we don't have to jump through hoops with sub-commands assigning their own flags and only conditionally using the config.KwildConfig struct that is currently being populated regardless of the command (with some tolerance for missing config file).

admin commands aka utils in a separate tool

The utils sub-commands can be made into primary commands of a kwil-admin tool. In addition to the current node utilities, this tool may also be the primary entry point for operations that can be categorized as "node operator" actions e.g. get peer info, add peer, ad hoc select, adjust other runtime-config of the running node, etc. See the next section for more about this.

It is trivial to distribute both binaries together.

Admin service (authenticated RPC server)

The existing gRPC service is used by end-users, and is unauthenticated (not including tx/message signature verification etc., which is not at the level of the service itself). Each of the endpoints are designed to support Kwil clients. There is a need for an authenticated service, with which only the node operator can interact. I've named some of the possible operations above.

This could be something other than gRPC, but we are fairly streamlined for gRPC so we should just make an api/protobuf/admin service. This service should be authenticated with a TLS key pair, with the service using the RequireAndVerifyClientCert option. It is quite simple and requires no passwords, just a keyfile on the client side (the admin tool).

Note about validator actions: On one hand, issuing a join/approve is an action that an operator would take, but these are arguably not node admin actions. In particular, these could be done via kwil-cli (not a kwild sub-command) by providing a private key to sign the approve/join/leave tx and then broadcasting it on a public gRPC endpoint. These would not have to involve a hypothetical new admin service of a particular node. The controller of a node's private key could be initiating these validator actions without using the node in question.

There is however an argument to be made that validator actions are within the realm of admin/operator actions, but consider the above as well as the fact that in another architecture the validator can be distinct from the node (see the privval.SignerClient type in cometbft), although for simplicity we have baked the validator into the node.

Decisions about environment variables, home directories, config files

We are presently undecided about a few things:

  • Any remaining use case for environment variables. Does it simplify any Docker use or can flags just as easily be provided? Why do we not automatically get environment variables for every single config struct field? Is this just a matter of tagging, defining a prefix, running some viper command, etc?
  • If --home_dir is provided, should kwild not look for the config file there (if --config is not also provided)?
  • If neither --config nor --home_dir are provided, should kwild not look in a default home directory?
  • If no config file exists in any searched locations, can we not treat this as if it were a config file with everything commented out (all defaults)?

The last point above makes sense to me, except that it's not sensible to have no private key set. Thus a final thought is: shall we have the private key be stored in a file, with the default file being ${home_dir}/privkey.json and the config setting becoming ConfigFile or ConfigPath? Finally, with such a separate key file, it might be sensible to autogenerate it (as if init were run followed by starting back up)?

Use new Kuneiform

Kuneiform was rewritten in ANTLR4; this brings a couple of changes:

  • All Kuneiform code is in the Kuneiform repo
  • A potentially new submodule for the Kuneiform ANTLR grammar
  • New features

cometbft logger often reports the wrong caller

By virtue of log.WithOptions(zap.AddCallerSkip(1)), we generally get the correct caller in the call stack, but for some reason the cometbft logs often report the wrong caller e.g.

{"level":"info","ts":1693413459.9544282,"caller":"runtime/asm_amd64.s:1650","msg":"Reconnecting to peer","module":"p2p","addr":"<node_id>@<host>:26756"}

I haven't looked at why that might be the case.

NEAR wallet signature format

NEAR web wallets that will be signing messages are different from Ethereum wallets, and notably MetaMask does not support NEAR directly. NEAR has also established some unique conventions for creating ed25519 signatures as well as structuring non-tx messages.

I'm attempting to confirm what the Sender and Nightly wallets do.

I will also check with Fractal what wallets they anticipate NEAR users might be using and whether they have any insights into these questions.

Comments/Concerns for concurrency (abci.go and sessions.go)

abci.go "BeginBlock" function

Concurrency Issue

a.commitWaiter.Wait()
a.commitWaiter.Add(1)
  1. The above code can allow more than one caller to execute a.commitWaiter.Add(1) before the first caller executes a.commitWaiter.Wait(). This can cause BeginBlock to be fully called more than once concurrently.
  2. If a call to a.committer.Begin(context.Background()) takes a long time, and a third call is made, then the callers may resume out of order, since Wait() does not enforce fairness/"ordering guarantees" (based on my understanding).
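A minimal sketch of one way to close that window, assuming the surrounding type owns both fields; names mirror the issue text but are illustrative:

import "sync"

type app struct {
	mtx          sync.Mutex
	commitWaiter sync.WaitGroup
}

// beginBlockGate serializes entry so no second caller can slip in
// between Wait and Add.
func (a *app) beginBlockGate() {
	a.mtx.Lock()
	defer a.mtx.Unlock()
	a.commitWaiter.Wait() // wait for any in-flight commit to finish
	a.commitWaiter.Add(1) // claim the slot before releasing the lock
}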

Check previous block height to detect retry or problem

You may want to cache the block height so you can compare whether receipt of each call is sequential. If not, then it could simply be a retry (e.g., it is the same as the in progress height received previously), etc. Minimally, perhaps pass it along to the committers so that it can be used to assert/panic or handle some sort of idempotent behavior/retry. If it is detected that it is something more than a retry, then the appropriate recovery/remediation action would need to be enacted.

Overall locking semantic

The internal locking semantic gives me pause if there is not some sort of way to ensure that the commitWaiter is unlocked due to an unexpected external failure (this may be somewhat mitigated depending on how the aforementioned height check is handled). For example, if the external control flow is interrupted (e.g. a network, or some failure, that makes it unknowable whether the call succeeded), then a retry may need to occur. If so, the call will block forever and the commitWaiter will be locked out for all future calls barring some detection mechanism to recycle the process, etc. Which then brings up the question of ensuring that only one process can be running concurrently (e.g. via a lock file, which is typically what is used, etc.).

session.go Lock out

The AtomicCommitter's inProgress value is not reset during commit and would result in erroring out all future callers if used more than once.

Toggle Gas Costs

Provide a disable-gas option for config creation; setting this option disables gas costs for all transactions.

If this option is provided, the genesis file is created with a chainID prefixed "kwil-chain-gcd"; otherwise the prefix is "kwil-chain-gce".

If the chainID convention is not followed, gas costs are enabled by default
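A minimal sketch of that convention expressed as a check (illustrative; the real lookup may live elsewhere):

import "strings"

// gasEnabled reports whether gas costs apply, per the chainID prefix
// convention above; unknown prefixes default to gas enabled.
func gasEnabled(chainID string) bool {
	return !strings.HasPrefix(chainID, "kwil-chain-gcd")
}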

validator module and store in ABCI application

This is a meta-issue covering multiple tasks pertaining to validators in the refactored ABCI application:

  • persistence of active validator and ongoing validator votes to disk - node restarts/recovery #97
  • a validator module, encapsulating the mgmt and transition of ValidatorUpdates away from the ABCI application

Both of the above are in progress and nearly ready for review. Since the refactor of ABCI app in fe1c007 on Wed., I've switched to a top-down prioritization, formulating the validator module from the previous machinery added by @charithabandi. This will assist with @brennanjl's parallel efforts to complete the ABCI application's composition.

PRs for both of the above sub-tasks will be created on Friday Aug 11. Module first, and store/persistence second.

I will create additional issues for the following adjacent tasks so the above can move forward without delay:

  • pkg/client bits, updated payload creation and signing. In progress, but something of a can of worms on account of the draft payloadEncoder concept and existing issues with signature schemes and address formats.
  • fixing all the existing signature and payload hashing bugs, in particular the issues we discussed on Thursday Aug 10 relating to ambiguity about tx.Sender (a string type) containing either an address or, in the case of validator-signed txns, an ed25519 pubkey in base64 encoding
  • using a payload encoder in the ABCI app, if we really want to, although my current feeling is that the types should implement the encoding.BinaryMarshaler and encoding.BinaryUnmarshaler interfaces instead.
  • a session mgr that impls the pkg/sessions.Committable interface for the app's AtomicCommitter
  • an endpoint for validator stats/info #92

This issue replaces #91, which was sorta closed in previous work.

enable TLS

Currently all communication (from kwil-cli) is done insecurely; we need to enable TLS by default.

TxQuery rpc

Typically in a blockchain, you send a Tx and get back a TxHash for later reference.
We don't have an RPC for this purpose, and we cannot test our blockchain without it.

This RPC should:

  1. expect an input TxHash, which was returned from an earlier transaction call
  2. return info about the transaction referred to by the TxHash; a possible shape could be like below:
type TxQueryResponse struct {
	hash     []byte
	height   uint64
	tx       Tx // original tx
	txResult struct {
		code    int
		log     string // errors etc
		events  []Event
		data    []byte
		gasUsed uint64
	}
}

On the implementation side, we need to:

  1. return the TxHash from the Broadcast RPC; this TxHash should be the CometBFT TxHash
  2. return the necessary info from our abci module (some of that info will be stored on chain); populate ABCI ResponseDeliverTx
  3. in the kwild TxQuery gRPC handler, convert ResponseDeliverTx to TxQueryResponse

Create SNAPSHOTS of Kwildb

A few requirements to satisfy:

  • Consistent: all nodes should take snapshots at the same heights, and snapshots shouldn't include changes from concurrent writes
  • Asynchronous: shouldn't affect chain progress
  • Deterministic: snapshots taken at the same height should be identical and of the same format

Approaches:

  • For now, offload the snapshot process to the read nodes only, and do a sql db file copy, storing it in chunks
  • Another approach is to log changesets and compact them to create a snapshot.

Update Validator Set on the fly

API: /update_validators?val="pubkey:power"

This can be used to add or remove a validator from the validator set:

  • To add a new validator: pub-key of the validator and power > 0
  • To remove a node from the validator set: pub-key of the validator and power = 0

This endpoint allows the ValidatorUpdate to be on-chain, allowing for synchronous update of the validator set by all the nodes.

The EndBlock ABCI interface would be used to communicate the validator set updates to CometBFT core
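A minimal sketch of that EndBlock hand-off, assuming the pre-0.38 (pre-ABCI-2.0) CometBFT interface; pendingValidatorUpdates is a hypothetical accessor:

import abcitypes "github.com/cometbft/cometbft/abci/types"

type App struct {
	// ...
}

// pendingValidatorUpdates is a hypothetical accessor for approved changes.
func (a *App) pendingValidatorUpdates() []abcitypes.ValidatorUpdate { return nil }

// EndBlock reports validator set changes; CometBFT applies them
// synchronously on all nodes at the block boundary.
func (a *App) EndBlock(req abcitypes.RequestEndBlock) abcitypes.ResponseEndBlock {
	return abcitypes.ResponseEndBlock{ValidatorUpdates: a.pendingValidatorUpdates()}
}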
