aleonet / snarkOS
A Decentralized Operating System for ZK Applications
Home Page: http://snarkos.org
License: Apache License 2.0
Have snarkOS create a ~/.snarkOS directory and name the RocksDB files snarkos_db and snarkos_test_db, or something equivalent.
The native Merkle tree implementation performs extra hashing that is not strictly necessary:
Both are unnecessary for correctness/security (although hashing the leaves is standard practice).
If we remove these, the proof can focus on hashing the transaction IDs, resulting in a higher possible capacity.
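A minimal sketch of the idea, building the root directly over transaction IDs with no extra leaf-hashing pass. This is illustrative only: DefaultHasher stands in for the real CRH, and hash_pair/merkle_root are hypothetical names, not snarkOS APIs.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for the tree's two-to-one compression function.
fn hash_pair(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    left.hash(&mut h);
    right.hash(&mut h);
    h.finish()
}

// Build a Merkle root directly over transaction IDs, treating each ID as a
// leaf without hashing it first.
fn merkle_root(tx_ids: &[u64]) -> u64 {
    let mut level: Vec<u64> = tx_ids.to_vec();
    while level.len() > 1 {
        level = level
            .chunks(2)
            // An odd element at the end is paired with itself.
            .map(|pair| hash_pair(pair[0], *pair.last().unwrap()))
            .collect();
    }
    level[0]
}

fn main() {
    println!("root = {}", merkle_root(&[1, 2, 3, 4]));
}
```

Skipping the leaf hash means one fewer hash per transaction inside the circuit, which is where the capacity gain comes from.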
Currently the ledger is instantiated by ledger_storage using a genesis record serial number, genesis record commitment, and a genesis memo. A more intuitive design is to instantiate a genesis block with prepared genesis transactions, and have ledger_storage parse this genesis block and add it to the system's state.
To implement this, we will need to do the following:
- Set the predicate_vk_hash when transactions are generated in consensus
- Add a genesis submodule with prepared genesis transactions, and logic to construct the genesis block
- Update LedgerStorage to initialize the system using the genesis block
- Remove genesis_serial_number, genesis_commitment, genesis_memo, genesis_predicate_vk_bytes, and genesis_account
Tracing provides great logging support, with multiple verbosity levels and contexts, instead of bare println calls. Example of past integration:
In preparation for repo migration, we should move the Merkle tree data structure to its proper place in snarkOS-objects, and move the Merkle tree gadget in snarkOS-gadgets from its current algorithms subfolder into a new objects subfolder.
After discussion with @Pratyush and Ian, we will add one proof-check circuit to the outer_snark to verify the inner_snark proof. In doing this, we can remove the inner_proof from transactions.
This change results in a 15% increase in the time to produce a transaction; however, it reduces transaction sizes by 20+%. It also simplifies transaction verification logic for miners and clients by removing the inner_snark proof check in software, and will result in a speedup for transaction verification.
The delegated DPC implementation is currently a standalone application without permanent state/storage. New storage standards need to be put in place, and the current database implementation needs to be refactored to fit the DPC protocol.
Requirements:
- IdealLedger components

Inside the consensus package, there are a lot of places where the MerkleTreeLedger and Tx instantiated datatypes are used. Instead, we should use generics bounded by Tx: Transaction and P: MerkleParameters, together with LedgerStorage<Tx, P>, so that we are able to run tests with a TestTx data structure. This would allow us to run our tests much faster and in a more scalable way, since we would be able to create blocks with transactions that do not require proving.
Examples:
- ConsensusParameters
- Miner
- snarkos-network's Server (and all associated helper methods)

Many of the tests currently written as integration tests against an on-disk database could then be rewritten as unit tests with much more extensive coverage (and quicker to run).
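A sketch of the generics-based testing approach described above, under the assumption of a simplified Transaction trait; the trait shape and TestTx here are illustrative, not the actual snarkOS definitions.

```rust
use std::fmt::Debug;

// Hypothetical minimal stand-in for snarkOS's Transaction trait.
trait Transaction: Clone + Debug {
    fn id(&self) -> u64;
}

// A lightweight test transaction: no proofs, so constructing blocks is cheap.
#[derive(Clone, Debug)]
struct TestTx(u64);

impl Transaction for TestTx {
    fn id(&self) -> u64 {
        self.0
    }
}

// Consensus-style helper written against the trait, not a concrete Tx type,
// so it accepts both the real transaction type and TestTx in unit tests.
fn block_tx_ids<T: Transaction>(txs: &[T]) -> Vec<u64> {
    txs.iter().map(|tx| tx.id()).collect()
}

fn main() {
    let txs = vec![TestTx(1), TestTx(2)];
    println!("{:?}", block_tx_ids(&txs));
}
```

Because the helper only depends on the trait bound, tests can exercise consensus logic without generating a single SNARK proof.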
Currently, when checking the PoW against the difficulty target, the verification trusts that the difficulty target in the header is correctly set. A malicious peer could feed us a block with the wrong target. Instead, the difficulty should be obtained via self.get_block_difficulty.
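A hedged sketch of the fix: recompute the expected target locally and reject headers whose claimed target disagrees. The types and the constant target are placeholders, and get_block_difficulty here is a stand-in for the method named above.

```rust
// Simplified block header for illustration.
struct BlockHeader {
    difficulty_target: u64,
}

// Stand-in for self.get_block_difficulty: derive the target from local chain
// state (a constant here, purely for the example).
fn get_block_difficulty() -> u64 {
    0x0000_ffff_ffff_ffff
}

fn verify_header(header: &BlockHeader, pow_hash: u64) -> bool {
    let expected = get_block_difficulty();
    // Do not trust the peer-supplied target; it must match the derived one.
    if header.difficulty_target != expected {
        return false;
    }
    // Standard PoW check against the (now trusted) target.
    pow_hash <= expected
}

fn main() {
    let header = BlockHeader { difficulty_target: 0x0000_ffff_ffff_ffff };
    println!("valid = {}", verify_header(&header, 42));
}
```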
Master is currently broken with the following error, which I presume is a flaky test:
https://travis-ci.com/github/AleoHQ/snarkOS/jobs/329788277
test server_listen::startup_handshake_stored_peers ... FAILED
failures:
---- server_listen::startup_handshake_stored_peers stdout ----
thread 'server_listen::startup_handshake_stored_peers' panicked at 'called `Result::unwrap()` on an `Err` value: ConnectError(Crate("std::io", "Os { code: 104, kind: ConnectionReset, message: \"Connection reset by peer\" }"))', network/tests/server_listen.rs:198:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
server_listen::startup_handshake_stored_peers
In an effort to formalize Aleo records (for usability), reduce miner code dependencies (for security), and introduce an abstraction layer for DPC (for modularity), we are migrating select submodules from snarkOS into snarkVM.
algorithms
benchmarks
curves
dpc
gadgets
This migration will allow snarkVM to focus on its core functionality of facilitating predicate (application) executions, producing DPC records, and composing records into transactions. This migration will also simplify the existing dependency chain for a number of repositories currently upstream. In addition, this migration should improve snarkOS compile times significantly.
In preparation for this migration, there are a few updates snarkOS will need to make in order to ensure a smooth transition.
To complete this migration, 3 submodules in snarkOS will need a minor update to call components from snarkVM.
In snarkos-consensus:
- dpc - InstantiatedDPC will be called from snarkVM
- dpc - PublicParameters will be called from snarkVM

In snarkos-network:
- dpc - Instantiated components of DPC will be called from snarkVM
- dpc - InstantiatedDPC will be called from snarkVM
- dpc - PublicParameters will be called from snarkVM

In snarkos-posw:
- gadgets - Merkle gadget will be called from snarkVM
- curves - BLS12-377 will be called from snarkVM
Lastly, prior to publishing snarkVM
to Crates.io, to ensure Travis CI continues to run, a private dependency link will be used for continuous integration.
The test data in consensus/src/test_data/mod.rs
is hard coded for particular parameters and DPC logic. If any of the DPC circuits change or parameters change, this data will need to be updated.
A solution to this is to add a script that regenerates this test data whenever breaking changes are made to other modules of snarkOS.
Currently leo relies on uint gadgets that are part of the feature/integers branch. The changes on this branch need to be merged for leo to work with the master branch of snarkOS.
Currently nodes start up with a socket address of 0.0.0.0
or 127.0.0.1
with no knowledge of their own public IP. This creates an issue with handshake connections because sending 0.0.0.0
or 127.0.0.1
to the peer gives them no context.
A current workaround is to manually set the public IP of the node in config.rs, or via the --ip CLI option, so that the node has knowledge of its own public IP address.
There needs to be a more seamless experience so node operators do not have to manually input their public IP when starting their node.
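One best-effort building block for this, sketched below: connecting a UDP socket selects an outbound route without sending any packets, so local_addr then reports the interface the OS would use. Note this yields the LAN interface address, not the public IP, when behind NAT, so a complete solution would still need an external service or the operator-provided --ip.

```rust
use std::net::UdpSocket;

// Discover the local outbound interface address. UDP connect() only sets the
// peer and picks a route; no traffic is generated. The peer address is an
// arbitrary public IP used purely for route selection.
fn local_ip() -> std::io::Result<std::net::IpAddr> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.connect("8.8.8.8:80")?;
    Ok(socket.local_addr()?.ip())
}

fn main() {
    match local_ip() {
        Ok(ip) => println!("detected interface address: {ip}"),
        Err(e) => eprintln!("detection failed: {e}"),
    }
}
```

This at least replaces the meaningless 0.0.0.0 / 127.0.0.1 in handshakes on machines with a directly routable address.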
These parameters should not live in the dpc
submodule. File path references to these parameters should be removed and these parameters should only be accessible at the requisite point of use.
The genesis block doesn't need to be mined because the consensus
module doesn't need to verify the validity of the block.
The mining logic in genesis/examples/create_genesis_block.rs
can be safely removed.
TLDR: Account public keys should reflect if the given private key is a mainnet or testnet account.
To incorporate this, we update our generation of account public keys from a commitment of three inputs to a commitment of four inputs (in addition to the randomness input).
Our current account public key is produced as follows:
let commit_input = to_bytes![private_key.pk_sig, private_key.sk_prf, private_key.metadata]?;
let commitment = C::AccountCommitment::commit(parameters, &commit_input, &private_key.r_pk)?;
Our new account public key will be produced as follows:
let commit_input = to_bytes![private_key.pk_sig, private_key.sk_prf, private_key.metadata, private_key.is_testnet]?;
let commitment = C::AccountCommitment::commit(parameters, &commit_input, &private_key.r_pk)?;
Currently LedgerStorage is instantiated from attributes in storage/src/genesis.rs. This is poorly designed, as it uses hard-coded values that must be changed when parameters are updated.
The solution is to migrate these attributes to the parameters module and have them generated whenever parameters are generated. This will help prevent breaking changes to storage when other modules are updated.
Currently snarkOS loads both the proving and verifying parameters by default.
This is quite inefficient. Light clients should be able to select just the verifying parameters if they are not generating transactions or mining.
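A minimal sketch of what a split loading path could look like. All names here (NodeParameters, load_parameters, the byte vectors standing in for the real .params files) are hypothetical, not snarkOS APIs.

```rust
// Stand-ins for the real parameter blobs read from disk.
struct VerifyingParameters(#[allow(dead_code)] Vec<u8>);
struct ProvingParameters(#[allow(dead_code)] Vec<u8>);

enum NodeParameters {
    // Light clients: verification only.
    Verifier(VerifyingParameters),
    // Miners / transaction authors: both keys.
    Prover(ProvingParameters, VerifyingParameters),
}

fn load_parameters(is_prover: bool) -> NodeParameters {
    // The verifying key is small and always needed.
    let vk = VerifyingParameters(vec![0u8; 16]);
    if is_prover {
        // Only provers pay the cost of loading the large proving key.
        let pk = ProvingParameters(vec![0u8; 1024]);
        NodeParameters::Prover(pk, vk)
    } else {
        NodeParameters::Verifier(vk)
    }
}

fn main() {
    match load_parameters(false) {
        NodeParameters::Verifier(_) => println!("loaded verifying key only"),
        NodeParameters::Prover(..) => println!("loaded both keys"),
    }
}
```

The key point is that the decision happens once at startup, so verification-only nodes never touch the multi-gigabyte proving parameters.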
Add a new module that handles all logic currently distributed in the test_data
directories in consensus
, dpc
, and network
. This module will be imported as a dev dependency and used for tests only.
This will reduce unnecessary compilation.
Create a docker image to standardize testing and runtime environment.
Potentially useful for debugging a flaky network test specific to Travis CI (see #80)
We probably should write a macro which impls Parameters, which would be invoked like so:
impl_params!(AccountCommitmentParameters, "./account_commitment.checksum", 417868, "./account_commitment.params");
This would reduce the LoC of the parameters module and make it easier to maintain and review when changes are needed.
In that case I can see us removing all the .rs files in the module and just calling the impl macro in the lib.rs file. Then, we could also move all the .params and .sum files to a subdirectory called params (or something).
Originally posted by @gakonst in https://github.com/AleoHQ/snarkOS/pull/184
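A hedged sketch of what such an impl_params! macro could expand to. The real version would presumably embed the checksum and params files (e.g. via include_bytes!); here the paths are kept as plain strings so the sketch stands alone, and the Parameters trait shape is an assumption.

```rust
// Assumed shape of the trait the macro implements.
trait Parameters {
    const SIZE: u64;
    fn checksum_path() -> &'static str;
    fn params_path() -> &'static str;
}

// One macro invocation replaces an entire hand-written .rs file per parameter.
macro_rules! impl_params {
    ($name:ident, $checksum:expr, $size:expr, $params:expr) => {
        struct $name;

        impl Parameters for $name {
            const SIZE: u64 = $size;
            fn checksum_path() -> &'static str {
                $checksum
            }
            fn params_path() -> &'static str {
                $params
            }
        }
    };
}

impl_params!(
    AccountCommitmentParameters,
    "./account_commitment.checksum",
    417868,
    "./account_commitment.params"
);

fn main() {
    println!(
        "{} bytes at {}",
        AccountCommitmentParameters::SIZE,
        AccountCommitmentParameters::params_path()
    );
}
```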
The design we've settled on for now introduces a new MaskedCRHGadget with an evaluation function that receives a mask, and a compute_root gadget that computes a root given a mask and a MaskedCRHGadget implementation.
A cleaner design would be to define a new masked Pedersen CRH which receives the mask at initialization time and exposes the normal evaluation functions, and so the Merkle tree would be oblivious to the fact it's using a masked CRH.
The current design of the Pedersen CRH gadget makes this hard, since it expects to share its parameters between the native and gadget implementations. This in turn would require us to introduce a native implementation of the masked Pedersen CRH, which is unnecessary complexity.
This step was originally intended to be performed after migration to snarkVM.
As parameters have become a blocker, we're moving up this split to facilitate the migration of parameters and testing parameters out of snarkos-dpc and into the genesis and testing modules.
This split should help ensure developers have convenient access to the Setup operation, and enforce system safety for Aleo runtimes.
Assuming compilation works, we need basic instructions for how to start a node and operate it. Given anticipated changes, a few steps will be sufficient for the time being.
Include a transaction hash field in the Transaction struct so it is standardized.
Currently, payments are handled with a payment record and a payment circuit. This means value transfer is more of an application on the network rather than intrinsic to the protocol itself. The value commitment architecture is not entirely necessary and can be modified which will benefit circuit sizes.
A solution to this is to unify the record model to always have a value field, and to verify the record values and transaction value_balance directly in the InnerCircuit.
To implement this, we will need to do the following:
- Move value out of the record payload and make it a dedicated attribute
- Make the PaymentCircuit a generic PredicateCircuit that doesn't handle record values
- Remove the PaymentCircuit from the OuterCircuit
- Update the InnerCircuit to handle all record value and transaction value_balance checks
The tests should remove databases that get constructed, and/or use in memory databases.
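One way to guarantee cleanup, sketched with std only: a guard type that owns the test database directory and removes it in Drop, which also runs during panic unwinding. TempDb is a hypothetical helper, not an existing snarkOS type.

```rust
use std::fs;
use std::path::PathBuf;

// Guard that owns a throwaway database directory for the lifetime of a test.
struct TempDb {
    path: PathBuf,
}

impl TempDb {
    fn new(name: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(name);
        fs::create_dir_all(&path)?;
        Ok(TempDb { path })
    }
}

impl Drop for TempDb {
    fn drop(&mut self) {
        // Best-effort removal; runs even if the test panics.
        let _ = fs::remove_dir_all(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let db = TempDb::new("snarkos_test_db_example")?;
    println!("test db at {:?}", db.path);
    Ok(())
}
```

An in-memory storage backend would avoid the filesystem entirely; this guard is the fallback for tests that genuinely need an on-disk RocksDB instance.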
Currently the value commitments are included in the payment DPC transaction along with the binding signature and value balance. The binding signature is then verified outside the DPC circuits with the value commitments and value balance.
Payment values are hidden with these commitments, but non-payment transactions will also require garbage value commitments as obfuscation.
Possible solution is to remove value commitments and binding signature from the transaction and verify the binding signature in the DPC circuit (with the value balance as the public input).
This value balance is important because it allows the consensus model to accurately assign fees for the miner, who learns nothing else from the transactions.
Note: This will decrease storage size and simplify transactions significantly, at the cost of increased circuit size and runtime.
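The value balance itself is simple arithmetic; a sketch of the relation the circuit would enforce, with hypothetical names and plain signed integers standing in for field elements:

```rust
// value_balance = sum(input record values) - sum(output record values).
// When positive, this surplus is the fee the miner can claim; the miner
// learns nothing else about the transaction.
fn value_balance(input_values: &[i64], output_values: &[i64]) -> i64 {
    let inputs: i64 = input_values.iter().sum();
    let outputs: i64 = output_values.iter().sum();
    inputs - outputs
}

fn main() {
    // Two old records worth 60 and 40, two new records worth 70 and 25:
    // the remaining 5 is the miner's fee.
    println!("fee = {}", value_balance(&[60, 40], &[70, 25]));
}
```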
Cross-linking this issue here: arkworks-rs/snark#152 (comment)
In our infrastructure, we will need to add an implementation here:
https://github.com/AleoHQ/snarkOS/tree/master/curves/src
Rename the CLI invocations and flags:
- cargo run test to cargo run --
- --miner to --is-miner
- --is_bootnode to --is-bootnode

Anytime the circuits are changed, the public parameters have to be regenerated. Currently, if I change the circuit and I use an old parameters file, the code will run until it tries to verify the computation, where it'll return a Core NIZK didn't verify
error. This can be very frustrating and also destroy productive developer hours (because you have to wait >1h to compile + regenerate the params).
I have the following suggestions:
- wget of the params (I prefer having to download a 3GB file which I know is going to be the correct one, to re-generating everything myself each time there's a circuit change on master.)

Also, we should investigate improving I/O performance; loading the params takes a few minutes on my box. Adding buffered readers (and writers) might do the trick, although I suspect we might have some unnecessary math operations somewhere which hurt performance. Kobi had the following idea:
Kobi Gurkan, [21.05.20 10:38]
maybe one thing we could do to improve load/save times is to use from_repr_raw and into_repr_raw instead of the normal ones when writing/reading. This would leave out using mont_reduce. We'd also be using the [u64] directly
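The buffered-reader suggestion above can be sketched as follows; load_params is a hypothetical helper, and the 1 MiB buffer size is an arbitrary choice, not a measured optimum.

```rust
use std::fs::File;
use std::io::{BufReader, Read, Write};

// Read a params file through a BufReader so the kernel sees large reads
// instead of many small unbuffered ones.
fn load_params(path: &std::path::Path) -> std::io::Result<Vec<u8>> {
    let file = File::open(path)?;
    let mut reader = BufReader::with_capacity(1 << 20, file); // 1 MiB buffer
    let mut bytes = Vec::new();
    reader.read_to_end(&mut bytes)?;
    Ok(bytes)
}

fn main() -> std::io::Result<()> {
    // Write a tiny stand-in file so the example is self-contained.
    let path = std::env::temp_dir().join("example.params");
    File::create(&path)?.write_all(&[1, 2, 3])?;
    println!("loaded {} bytes", load_params(&path)?.len());
    Ok(())
}
```

If the bottleneck turns out to be deserialization math (e.g. Montgomery reduction per field element) rather than I/O, buffering alone won't fix it, which is where the from_repr_raw idea comes in.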
Use of the Storage trait for algorithms is bad engineering design. Instead, we should implement load functionality as a wrapper call for parameters that use algorithms.
Currently parameters are loaded by reading a File at a specific PathBuf. The logic should be updated to utilize the include_bytes macro.
Currently the consensus logic for handling blocks (DPC) is naive. There needs to be more sophisticated logic for block selection and forking when operating with many disjoint nodes.
Additional feature requirements:
A few features/fixes need to be added to the RPC:
The buffer allocated to hash the leaf is larger than the leaf size in order to be able to use the hash_inner_node function: https://github.com/AleoHQ/snarkOS/blob/51b71c1cbd9eec0ccd95016a1b247ce0cd1f2375/algorithms/src/merkle_tree/merkle_tree.rs#L42
While this works for us for now in POSW (since Pedersen is homomorphic and the 0s do not change the hash) and with Blake2s (since it works on 512-bit chunks anyway), we might want to change it so there's no confusion in the future.
The Schnorr signature randomize public key gadget fails to produce a valid randomized public key in the gadget domain.
Steps to reproduce:
cargo run test --is_bootnode
index out of bounds: the len is 0 but the index is 0
Fix:
Add a check during bootnode parsing to ensure the vector is non-empty.
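The fix might look like the sketch below: validate the parsed bootnode list before indexing into it, instead of unconditionally reading element 0. first_bootnode is a hypothetical helper name.

```rust
use std::net::SocketAddr;

// Parse the bootnode strings and return the first valid address, if any.
fn first_bootnode(raw: &[&str]) -> Option<SocketAddr> {
    let parsed: Vec<SocketAddr> = raw.iter().filter_map(|s| s.parse().ok()).collect();
    // Returns None on an empty list rather than panicking with
    // "index out of bounds: the len is 0 but the index is 0".
    parsed.first().copied()
}

fn main() {
    println!("{:?}", first_bootnode(&["127.0.0.1:4131"]));
    println!("{:?}", first_bootnode(&[]));
}
```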
Unrelated to the flaky networking test we had prior, we have come across a networking issue that cannot be replicated on macOS, but seems reproducible on EC2 Ubuntu.
Running /home/ubuntu/snarkOS/target/debug/deps/server_message_handler-fa787f9700812d1f
running 15 tests
test server_message_handler::receive_get_block ... ok
test server_message_handler::receive_block_message ... ok
test server_message_handler::receive_get_memory_pool_empty ... ok
test server_message_handler::receive_get_memory_pool_normal ... ok
test server_message_handler::receive_get_peers ... ok
test server_message_handler::receive_get_sync ... ok
test server_message_handler::receive_memory_pool ... ok
test server_message_handler::receive_peers ... ok
test server_message_handler::receive_ping ... ok
test server_message_handler::receive_pong_accepted ... FAILED
test server_message_handler::receive_pong_rejected ... FAILED
test server_message_handler::receive_pong_unknown ... ok
test server_message_handler::receive_sync ... ok
test server_message_handler::receive_sync_block ... FAILED
test server_message_handler::receive_transaction ... ok
failures:
---- server_message_handler::receive_pong_accepted stdout ----
thread 'server_message_handler::receive_pong_accepted' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
---- server_message_handler::receive_pong_rejected stdout ----
thread 'server_message_handler::receive_pong_rejected' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5
---- server_message_handler::receive_sync_block stdout ----
thread 'server_message_handler::receive_sync_block' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5
failures:
server_message_handler::receive_pong_accepted
server_message_handler::receive_pong_rejected
server_message_handler::receive_sync_block
test result: FAILED. 12 passed; 3 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--test server_message_handler'
and
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running /home/ubuntu/snarkOS/target/debug/deps/server_connection_handler-340a7b00af4d0aa5
running 6 tests
test server_connection_handler::gossiped_peer_disconnect ... ok
test server_connection_handler::gossiped_peer_connect ... ok
test server_connection_handler::memory_pool_interval ... FAILED
test server_connection_handler::peer_connect ... ok
test server_connection_handler::peer_disconnect ... ok
test server_connection_handler::sync_node_disconnect ... ok
failures:
---- server_connection_handler::memory_pool_interval stdout ----
thread 'server_connection_handler::memory_pool_interval' panicked at 'called `Result::unwrap()` on an `Err` value: Crate("std::io", "Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" }")', /home/ubuntu/snarkOS/network/src/test_data/mod.rs:73:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
server_connection_handler::memory_pool_interval
test result: FAILED. 5 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--test server_connection_handler'
Currently we load it as an empty vec here, so there should be no need to have the file path in the code: https://github.com/AleoHQ/snarkOS/blob/master/dpc/src/dpc/base_dpc/parameters.rs#L259