
aleonet / snarkos


A Decentralized Operating System for ZK Applications

Home Page: http://snarkos.org

License: Apache License 2.0

Rust 97.35% Shell 1.72% HTML 0.38% JavaScript 0.55%
aleo blockchain cryptography rust zero-knowledge zksnarks

snarkos's People

Contributors

akattis, apruden2008, christianwwwwwwww, collinc97, d0cd, daniilr, dependabot-preview[bot], dependabot[bot], gakonst, howardwu, iamalwaysuncomfortable, joske, jules, kobigurk, ljedrz, mdelle1, meshiest, miazn, niklaslong, protryon, raychu86, sadroeck, splittydev, tudorpintea999, vbar, vicsn, vvp, wcannon, wcannon-aleo, zosorock


snarkos's Issues

Move and rename db

Have snarkOS create a ~/.snarkOS directory and have the RocksDB file be called snarkos_db and snarkos_test_db, or something equivalent.
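
A minimal sketch of the proposed layout, resolving the database path under ~/.snarkOS via the HOME environment variable; the helper name is hypothetical and the file names simply mirror the suggestion above.

use std::path::PathBuf;

// Hypothetical helper: resolve the default RocksDB location under ~/.snarkOS.
fn default_db_path(is_test: bool) -> std::io::Result<PathBuf> {
    let home = std::env::var("HOME")
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::NotFound, "HOME is not set"))?;
    let mut path = PathBuf::from(home);
    path.push(".snarkOS");
    // Create ~/.snarkOS on first use if it does not already exist.
    std::fs::create_dir_all(&path)?;
    path.push(if is_test { "snarkos_test_db" } else { "snarkos_db" });
    Ok(path)
}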

Merkle tree does extra hashing that increases the cost in POSW

The native Merkle tree implementation is doing extra hashing that is not crucial:

Neither of these is necessary for correctness or security (although hashing the leaves is standard practice).

If we remove them, the proof can focus on hashing the transaction IDs, resulting in a higher possible capacity.

Restructure ledger instantiation

Currently the ledger is instantiated by ledger_storage using a genesis record serial number, genesis record commitment, and a genesis memo. A more intuitive design is to instantiate a genesis block with prepared genesis transactions, and have ledger_storage parse this genesis block and add it to the system's state.

To implement this, we will need to do the following:

  • Generate predicate_vk_hash when transactions are generated in consensus
  • Add a genesis submodule with prepared genesis transactions, and logic to construct the genesis block
  • Update LedgerStorage to initialize the system using the genesis block
  • Remove genesis_serial_number, genesis_commitment, genesis_memo, genesis_predicate_vk_bytes, and genesis_account
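
A rough sketch of the target initialization flow; the Block and LedgerStorage shapes below are simplified stand-ins for illustration, not the actual snarkOS types.

// Simplified stand-ins; the real snarkOS Block and LedgerStorage differ.
struct Block {
    transactions: Vec<Vec<u8>>, // prepared (serialized) genesis transactions
}

struct LedgerStorage {
    blocks: Vec<Block>,
}

impl LedgerStorage {
    /// Initialize ledger state by parsing a prepared genesis block, instead of
    /// taking a genesis serial number, commitment, and memo as separate inputs.
    fn new(genesis_block: Block) -> Self {
        Self { blocks: vec![genesis_block] }
    }
}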

Update DPC outer_snark to verify inner_snark proof

After discussion with @Pratyush and Ian, we will add one proof-check circuit to the outer_snark to verify the inner_snark proof. In doing this, we can remove the inner_proof from transactions.

This change results in a 15% increase in the time to produce a transaction; however, it reduces transaction sizes by more than 20%. It also simplifies transaction verification logic for miners and clients by removing the inner_snark proof check in software, and will result in a speedup for transaction verification.

Storage of Delegated DPC Components

The delegated DPC implementation is currently a standalone application without permanent state or storage. New storage standards need to be put in place, and the current database implementation needs to be refactored to fit the DPC protocol.

Requirements:

  • Database Redesign
  • Public Parameter Storage and Retrieval
  • Block/Transaction Storage
  • Integrate IdealLedger components

Generalize structs to not require MerkleTreeLedger and DPCTransaction

Inside the consensus package, there are many places where the MerkleTreeLedger and Tx instantiated datatypes are used. Instead, we should be using generics bounded by the Tx: Transaction, P: MerkleParameters, and LedgerStorage<Tx, P> traits, so that we are able to run tests with a TestTx data structure. This would allow us to run our tests much faster and in a more scalable way, since we would be able to create blocks with transactions that do not require proving.

Examples:

  • ConsensusParameters
  • Miner
  • snarkos-network's Server (and all associated helper methods)

Many of the tests that are currently written as integration tests against an on-disk database could then be rewritten as unit tests with much more extensive coverage (and quicker to run).
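
An illustrative sketch of the generic bounds described above; the trait and struct definitions are simplified stand-ins for the real snarkOS types.

use std::marker::PhantomData;

// Simplified trait stand-ins; the real snarkOS traits carry many more items.
trait Transaction {}
trait MerkleParameters {}
trait LedgerStorage<T: Transaction, P: MerkleParameters> {}

// A test-only transaction that requires no proving, so consensus tests can
// build blocks cheaply.
struct TestTx;
impl Transaction for TestTx {}

// Instead of hard-coding DPCTransaction and MerkleTreeLedger, consensus types
// would be generic over the traits above.
struct ConsensusParameters<T: Transaction, P: MerkleParameters, L: LedgerStorage<T, P>> {
    ledger: L,
    _phantom: PhantomData<(T, P)>,
}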

Must not get difficulty from the header

Currently, when checking the PoW against the difficulty target, the verification trusts that the difficulty target in the header is correctly set. A malicious peer could feed us a block with the wrong target. Instead, the difficulty should be obtained via self.get_block_difficulty.
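
A minimal sketch of the intended check; get_block_difficulty is referenced above, but its signature and the surrounding types here are assumptions.

struct BlockHeader {
    difficulty_target: u64,
}

struct Consensus;

impl Consensus {
    /// Difficulty derived from our own chain state, not from peer-supplied data.
    fn get_block_difficulty(&self, _parent_hash: &[u8; 32]) -> u64 {
        u64::MAX >> 10 // placeholder value
    }

    fn verify_header(&self, parent_hash: &[u8; 32], header: &BlockHeader, pow_value: u64) -> bool {
        // Do not trust header.difficulty_target; recompute the target locally
        // and check both the declared target and the PoW against it.
        let expected = self.get_block_difficulty(parent_hash);
        header.difficulty_target == expected && pow_value <= expected
    }
}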

Flaky networking test

Master is currently broken with the following error, which I presume is caused by a flaky test:

https://travis-ci.com/github/AleoHQ/snarkOS/jobs/329788277

test server_listen::startup_handshake_stored_peers ... FAILED
failures:
---- server_listen::startup_handshake_stored_peers stdout ----
thread 'server_listen::startup_handshake_stored_peers' panicked at 'called `Result::unwrap()` on an `Err` value: ConnectError(Crate("std::io", "Os { code: 104, kind: ConnectionReset, message: \"Connection reset by peer\" }"))', network/tests/server_listen.rs:198:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
    server_listen::startup_handshake_stored_peers

Tracking: Migration of select submodules from `snarkOS` into `snarkVM`

In an effort to formalize Aleo records (for usability), reduce miner code dependencies (for security), and introduce an abstraction layer for DPC (for modularity), we are migrating select submodules from snarkOS into snarkVM.

  • algorithms
  • benchmarks
  • curves
  • dpc
  • gadgets

This migration will allow snarkVM to focus on its core functionality of facilitating predicate (application) executions, producing DPC records, and composing records into transactions. This migration will also simplify the existing dependency chain for a number of repositories currently upstream. In addition, this migration should improve snarkOS compile times significantly.

In preparation for this migration, there are a few updates snarkOS will need to make in order to ensure a smooth transition.

  • #139 - Remove Storage trait implementations
  • #140 - Remove parameters from snarkos-dpc

To complete this migration, three submodules in snarkOS will need a minor update to call components from snarkVM instead.

In snarkos-consensus:

  • dpc - InstantiatedDPC will be called from snarkVM
  • dpc - PublicParameters will be called from snarkVM

In snarkos-network:

  • dpc - Instantiated components of DPC will be called from snarkVM
  • dpc - InstantiatedDPC will be called from snarkVM
  • dpc - PublicParameters will be called from snarkVM

In snarkos-posw:

  • gadgets - Merkle gadget will be called from snarkVM
  • curves - BLS12-377 will be called from snarkVM

Lastly, until snarkVM is published to Crates.io, a private dependency link will be used for continuous integration so that Travis CI continues to run.

Dynamic test data generation

The test data in consensus/src/test_data/mod.rs is hard-coded for particular parameters and DPC logic. If any of the DPC circuits or parameters change, this data will need to be updated.

A solution to this is to add a script that regenerates this test data whenever breaking changes are made to other modules of snarkOS.

Merge uint gadgets to master

Currently leo relies on uint gadgets that are part of the feature/integers branch. The changes on this branch need to be merged for leo to work with the master branch of snarkOS.

Fix node public IP discovery and bootnode connection

Currently nodes start up with a socket address of 0.0.0.0 or 127.0.0.1 with no knowledge of their own public IP. This creates an issue with handshake connections because sending 0.0.0.0 or 127.0.0.1 to the peer gives them no context.

The current workaround is to manually set the node's public IP in config.rs or via the --ip CLI option so that the node knows its own public IP address.

There needs to be a more seamless experience so that node operators do not have to manually input their public IP when starting their node.
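
A minimal sketch of the workaround's logic, with the function name assumed: only advertise the bound address when it is routable, otherwise fall back to the manually configured public IP.

use std::net::{IpAddr, SocketAddr};

// Hypothetical helper illustrating the fallback described above.
fn advertised_address(bound: SocketAddr, configured_ip: Option<IpAddr>) -> Option<SocketAddr> {
    let ip = bound.ip();
    if ip.is_unspecified() || ip.is_loopback() {
        // Sending 0.0.0.0 or 127.0.0.1 to a peer gives them no usable context,
        // so fall back to the public IP set in config.rs or via --ip.
        configured_ip.map(|public_ip| SocketAddr::new(public_ip, bound.port()))
    } else {
        Some(bound)
    }
}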

Remove `parameters` from `snarkos-dpc`

These parameters should not live in the dpc submodule. File path references to these parameters should be removed and these parameters should only be accessible at the requisite point of use.

Remove mining in genesis block generation script

The genesis block doesn't need to be mined because the consensus module doesn't need to verify the validity of the block.

The mining logic in genesis/examples/create_genesis_block.rs can be safely removed.

Introduce `network_id` to DPC transactions

TLDR: Account public keys should reflect whether the given private key belongs to a mainnet or testnet account.

To incorporate this, we update our generation of account public keys from a commitment over three inputs to a commitment over four inputs (in addition to the randomness input).

Our current account public key is produced as follows:

let commit_input = to_bytes![private_key.pk_sig, private_key.sk_prf, private_key.metadata]?;
let commitment = C::AccountCommitment::commit(parameters, &commit_input, &private_key.r_pk)?;

Our new account public key will be produced as follows:

let commit_input = to_bytes![private_key.pk_sig, private_key.sk_prf, private_key.metadata, private_key.is_testnet]?;
let commitment = C::AccountCommitment::commit(parameters, &commit_input, &private_key.r_pk)?;

Refactor ledger's genesis attributes

Currently LedgerStorage is instantiated from attributes in storage/src/genesis.rs. This is poorly designed, as it uses hard-coded values that must be changed whenever parameters are updated.

The solution is to migrate these attributes into the parameters module and regenerate them whenever parameters are generated. This will help prevent breaking changes to storage when other modules are updated.

Minimal parameter loading

Currently snarkOS loads both the proving and verifying parameters by default.

This is quite inefficient. Light clients should be able to load just the verifying parameters if they are not generating transactions or mining.
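
A minimal sketch of what selective loading could look like; the ParameterLoad enum, file names, and loader function are assumptions, not the actual snarkOS API.

// All names and file paths below are illustrative.
enum ParameterLoad {
    /// Full nodes and miners need both proving and verifying parameters.
    Full,
    /// Light clients that neither mine nor create transactions only need the
    /// verifying parameters.
    VerifyingOnly,
}

struct PublicParameters {
    proving: Option<Vec<u8>>,
    verifying: Vec<u8>,
}

fn load_parameters(mode: ParameterLoad) -> std::io::Result<PublicParameters> {
    let verifying = std::fs::read("verifier.params")?;
    let proving = match mode {
        ParameterLoad::Full => Some(std::fs::read("prover.params")?),
        ParameterLoad::VerifyingOnly => None,
    };
    Ok(PublicParameters { proving, verifying })
}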

Remove `test_data` directories from all modules.

Add a new module that handles all logic currently distributed in the test_data directories in consensus, dpc, and network. This module will be imported as a dev dependency and used for tests only.

This will reduce unnecessary compilation.

Add Docker container

Create a docker image to standardize testing and runtime environment.

Potentially useful for debugging a flaky network test specific to Travis CI (see #80)

Implement macro to generate Parameters

We should probably write a macro that impls Parameters, which would be invoked like so:

impl_params!(AccountCommitmentParameters, "./account_commitment.checksum", 417868, "./account_commitment.params");

This would reduce the LoC of the parameters module and make it easier to maintain and review when changes are needed.

In that case I can see us removing all the .rs files in the module and just calling the impl macro in the lib.rs file. Then, we could also move all the .params and .sum files to a subdirectory called params (or something similar).

Originally posted by @gakonst in https://github.com/AleoHQ/snarkOS/pull/184
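
A rough sketch of what such a macro could look like; the Parameters trait shape (CHECKSUM, SIZE, load_bytes) is assumed for illustration and does not match the real snarkos-parameters API.

// Assumed trait shape, for illustration only.
trait Parameters {
    const CHECKSUM: &'static str;
    const SIZE: u64;
    fn load_bytes() -> std::io::Result<Vec<u8>>;
}

macro_rules! impl_params {
    ($name:ident, $checksum_path:expr, $size:expr, $params_path:expr) => {
        pub struct $name;

        impl Parameters for $name {
            // The checksum file is small enough to embed at compile time.
            const CHECKSUM: &'static str = include_str!($checksum_path);
            const SIZE: u64 = $size;

            // The .params files are large, so read them from disk at runtime.
            fn load_bytes() -> std::io::Result<Vec<u8>> {
                std::fs::read($params_path)
            }
        }
    };
}

// Mirrors the invocation shown above:
impl_params!(
    AccountCommitmentParameters,
    "./account_commitment.checksum",
    417868,
    "./account_commitment.params"
);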

Investigate a cleaner design for masked Pedersen and Merkle tree

The design we've settled on for now introduces a new MaskedCRHGadget with an evaluation function that receives a mask, and a compute_root gadget that computes a root given a mask and a MaskedCRHGadget implementation.

A cleaner design would be to define a new masked Pedersen CRH that receives the mask at initialization time and exposes the normal evaluation functions, so the Merkle tree would be oblivious to the fact that it is using a masked CRH.

The current design of the Pedersen CRH gadget makes this hard, since it expects to share the parameters between the native and gadget implementations. This in turn requires us to introduce a native implementation of the masked Pedersen CRH, which adds unnecessary complexity.
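
A rough sketch of the cleaner design described above, assuming a simplified CRH trait: the mask is fixed at initialization, so the Merkle tree sees an ordinary CRH. All names and the placeholder masking logic are illustrative.

// Simplified trait; the real CRH trait in snarkOS carries setup and parameters.
trait CRH {
    type Output;
    fn hash(&self, input: &[u8]) -> Self::Output;
}

struct MaskedPedersenCRH {
    parameters: Vec<u8>, // Pedersen generators, shared with the gadget
    mask: Vec<u8>,       // fixed at initialization, not passed per evaluation
}

impl CRH for MaskedPedersenCRH {
    type Output = Vec<u8>;

    fn hash(&self, input: &[u8]) -> Self::Output {
        // Placeholder: a real implementation would apply the mask inside the
        // Pedersen evaluation rather than XORing bytes.
        input.iter().zip(self.mask.iter().cycle()).map(|(a, b)| a ^ b).collect()
    }
}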

Split `DPC` into subtraits and individual structs

This step was originally intended to be performed after migration to snarkVM.

As parameters have become a blocker, we're moving up this split to facilitate the migration of parameters and testing parameters out of snarkos-dpc and into genesis and testing modules.

This split should help to ensure developers have inconvenient access to the Setup operation, and enforce system safety for Aleo runtimes.

[README] Instructions to start a node

Assuming compilation works, we need basic instructions for how to start a node and operate it. Given anticipated changes, a few steps will be sufficient for the time being.

Add transaction ID

Include a transaction hash field in the Transaction struct so it is standardized.

Refactor record values and value commitments

Currently, payments are handled with a payment record and a payment circuit. This means value transfer is more of an application on the network than something intrinsic to the protocol itself. The value commitment architecture is not strictly necessary and can be modified in a way that benefits circuit sizes.

A solution to this is to unify the record model to always have a value field and to verify the record values and transaction value_balance directly in the InnerCircuit.

To implement this, we will need to do the following:

  • Move value out of the record payload and make it a dedicated attribute
  • Make the PaymentCircuit a generic PredicateCircuit that doesn't handle record values
    • Remove value commitment logic from PaymentCircuit
    • Update the predicate to be a generic
    • Remove value commitment logic from OuterCircuit
  • Update InnerCircuit to handle all record value and transaction value_balance checks
  • Phase out value commitments with direct value checks in the InnerCircuit
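
A rough sketch of the unified record shape, with the value promoted out of the payload into a dedicated field; the field names and types are assumptions for illustration.

// Illustrative layout only, not the actual snarkOS Record definition.
struct Record {
    owner: Vec<u8>,               // account public key bytes
    value: u64,                   // dedicated value field, checked in the InnerCircuit
    payload: Vec<u8>,             // application data; no longer carries the value
    serial_number_nonce: Vec<u8>,
    commitment: Vec<u8>,
}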

Remove value commitment attributes from the DPC transaction

Currently the value commitments are included in the payment DPC transaction along with the binding signature and value balance. The binding signature is then verified outside the DPC circuits with the value commitments and value balance.

Payment values are hidden with these commitments, but non-payment transactions will also require garbage value commitments as obfuscation.

Possible Solution

A possible solution is to remove the value commitments and binding signature from the transaction and verify the binding signature inside the DPC circuit (with the value balance as a public input).

This value balance is important because it allows the consensus model to accurately assign fees for the miner, who learns nothing else from the transactions.

Note: This will decrease storage size and simplify the transactions significantly, at the cost of increased circuit size and runtime.

Change snarkOS command line conventions

  • When using flags, change the default convention for passing flags from cargo run test to cargo run --
  • Change miner flag from --miner to --is-miner
  • Change all underscore convention to dash convention (e.g. --is_bootnode to --is-bootnode)
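
A hedged sketch of the dash-style flags using the structopt crate; the actual snarkOS CLI parser and flag set may differ, so the names below are illustrative.

use structopt::StructOpt;

// Illustrates the proposed dash-style flag names only.
#[derive(StructOpt, Debug)]
#[structopt(name = "snarkos")]
struct Opt {
    /// Run the node as a miner (previously --miner).
    #[structopt(long = "is-miner")]
    is_miner: bool,

    /// Run the node as a bootnode (previously --is_bootnode).
    #[structopt(long = "is-bootnode")]
    is_bootnode: bool,
}

fn main() {
    // Invoked as: cargo run -- --is-miner --is-bootnode
    let opt = Opt::from_args();
    println!("{:?}", opt);
}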

Improve Developer Experience around Public Parameters generation

Anytime the circuits are changed, the public parameters have to be regenerated. Currently, if I change the circuit and I use an old parameters file, the code will run until it tries to verify the computation, where it'll return a Core NIZK didn't verify error. This can be very frustrating and also destroy productive developer hours (because you have to wait >1h to compile + regenerate the params).

I have the following suggestions:

  1. Since checking in the inner/outer snark parameters is not an option (because they're multiple GBs in size), I recommend we check in their hash, and simply throw an error if the file's hash on disk does not match the checked-in hash. This allows us to error fast and in a well-determined place, instead of giving a misleading error (h/t @kobigurk)
  2. For every successful PR on CI, the generated inner/outer snark params should be automatically uploaded somewhere as artifacts (maybe an S3 bucket). Pulling from master would then be followed by a wget of the params (I prefer having to download a 3GB file which I know is going to be the correct one, to re-generating everything myself each time there's a circuit change on master.)

Also, we should investigate improving I/O performance; loading the params takes a few minutes on my box. Adding buffered readers (and writers) might do the trick, although I suspect we might have some unnecessary math operations somewhere that hurt performance. Kobi had the following idea:

Kobi Gurkan, [21.05.20 10:38]
maybe one thing we could do to improve load/save times is to use from_repr_raw and into_repr_raw instead of the normal ones when writing/reading. This would leave out using mont_reduce. We'd also be using the [u64] directly
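
A minimal sketch of the fail-fast check from point 1, assuming the checked-in checksum is a SHA-256 hex digest and that the sha2 crate is available; the real hashing scheme used for the .checksum files may differ.

use sha2::{Digest, Sha256};

fn verify_params(params_path: &str, expected_checksum: &str) -> Result<Vec<u8>, String> {
    let bytes = std::fs::read(params_path).map_err(|e| e.to_string())?;
    let digest = Sha256::digest(&bytes);
    let actual: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    if actual != expected_checksum.trim() {
        // Fail fast with a clear message, instead of failing much later with
        // "Core NIZK didn't verify" during transaction verification.
        return Err(format!(
            "parameter file {} is stale: checksum {} does not match expected {}",
            params_path, actual, expected_checksum.trim()
        ));
    }
    Ok(bytes)
}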

Remove `Storage` trait implementations

Using the Storage trait for algorithms is poor engineering design.

Instead, we should implement load functionality as a wrapper for the parameters that use these algorithms.
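
A minimal sketch of the proposed direction, with all names assumed: the parameter wrapper exposes a plain load constructor at its point of use instead of implementing a generic Storage trait on the algorithm types.

// Illustrative only; the real parameter types and serialization differ.
struct PedersenParameters {
    generators: Vec<Vec<u8>>,
}

impl PedersenParameters {
    /// Load parameters from serialized bytes at the point of use.
    fn load(bytes: &[u8]) -> Self {
        // Placeholder deserialization: chunk the byte string into generators.
        let generators = bytes.chunks(32).map(|c| c.to_vec()).collect();
        Self { generators }
    }
}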

Block forking and updated sync logic

Currently the consensus logic for handling blocks (DPC) is naive. There needs to be more sophisticated logic for block selection and forking when operating with many disjoint nodes.

Additional feature requirements:

  • Block caching
  • Chain forking
  • New block propagation and sync logic
  • Regular clearing of invalidated orphan blocks (based on established thresholds)

Add additional RPC support

A few features/fixes need to be added to the RPC:

  1. Direct fetching of transaction information
  2. Get block time information when fetching block info
  3. Fix get_block_template
  4. Clean up and add additional RPC tests
  5. Logic for reasoning about non-canon blocks

Merkle tree leaf hashing is done on a larger buffer than necessary

The buffer allocated to hash the leaf is larger than the leaf size in order to be able to use the hash_inner_node function: https://github.com/AleoHQ/snarkOS/blob/51b71c1cbd9eec0ccd95016a1b247ce0cd1f2375/algorithms/src/merkle_tree/merkle_tree.rs#L42

While this works for us now in POSW (since Pedersen is homomorphic and the zeros do not change the hash) and with Blake2s (since it operates on 512-bit chunks anyway), we might want to change it so there is no confusion in the future.

Node is_bootnode configuration parsing fails

Steps to reproduce:

  1. Run cargo run test --is_bootnode
  2. See error - index out of bounds: the len is 0 but the index is 0

Fix:

Add check during bootnode parsing to ensure vector is non-empty.
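
A minimal sketch of the fix, assuming the config field name: use a non-panicking accessor instead of indexing into a possibly empty vector.

// Field names are assumptions for illustration.
struct Config {
    bootnodes: Vec<String>,
}

fn first_bootnode(config: &Config) -> Option<&String> {
    // Previously the code effectively did &config.bootnodes[0], which panics
    // with "index out of bounds: the len is 0 but the index is 0" when no
    // bootnode addresses are configured.
    config.bootnodes.first()
}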

`Connection refused` networking test failure

Unrelated to the flaky networking test we had prior, we have come across a networking issue that we are unable to replicate on macOS but that seems reproducible on EC2 Ubuntu.

Running /home/ubuntu/snarkOS/target/debug/deps/server_message_handler-fa787f9700812d1f

running 15 tests
test server_message_handler::receive_get_block ... ok
test server_message_handler::receive_block_message ... ok
test server_message_handler::receive_get_memory_pool_empty ... ok
test server_message_handler::receive_get_memory_pool_normal ... ok
test server_message_handler::receive_get_peers ... ok
test server_message_handler::receive_get_sync ... ok
test server_message_handler::receive_memory_pool ... ok
test server_message_handler::receive_peers ... ok
test server_message_handler::receive_ping ... ok
test server_message_handler::receive_pong_accepted ... FAILED
test server_message_handler::receive_pong_rejected ... FAILED
test server_message_handler::receive_pong_unknown ... ok
test server_message_handler::receive_sync ... ok
test server_message_handler::receive_sync_block ... FAILED
test server_message_handler::receive_transaction ... ok

failures:

---- server_message_handler::receive_pong_accepted stdout ----
thread 'server_message_handler::receive_pong_accepted' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- server_message_handler::receive_pong_rejected stdout ----
thread 'server_message_handler::receive_pong_rejected' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5

---- server_message_handler::receive_sync_block stdout ----
thread 'server_message_handler::receive_sync_block' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5


failures:
    server_message_handler::receive_pong_accepted
    server_message_handler::receive_pong_rejected
    server_message_handler::receive_sync_block

test result: FAILED. 12 passed; 3 failed; 0 ignored; 0 measured; 0 filtered out

error: test failed, to rerun pass '--test server_message_handler'

and

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

     Running /home/ubuntu/snarkOS/target/debug/deps/server_connection_handler-340a7b00af4d0aa5

running 6 tests
test server_connection_handler::gossiped_peer_disconnect ... ok
test server_connection_handler::gossiped_peer_connect ... ok
test server_connection_handler::memory_pool_interval ... FAILED
test server_connection_handler::peer_connect ... ok
test server_connection_handler::peer_disconnect ... ok
test server_connection_handler::sync_node_disconnect ... ok

failures:

---- server_connection_handler::memory_pool_interval stdout ----
thread 'server_connection_handler::memory_pool_interval' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    server_connection_handler::memory_pool_interval

test result: FAILED. 5 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

error: test failed, to rerun pass '--test server_connection_handler'
