
anoma / anoma-archive

Reference implementation of the Anoma protocols in Rust.

Home Page: https://anoma.net

License: GNU General Public License v3.0

Rust 63.51% Makefile 0.34% Nix 36.00% Python 0.06% Shell 0.06% Dockerfile 0.03%
rust cryptography blockchain protocol consensus distributed-systems p2p

anoma-archive's Introduction

NOTE: This repository is currently being reworked. Instead, please visit the following locations, depending on your interest:



Overview

Anoma is an intent-centric, privacy-preserving protocol for decentralized counterparty discovery, solving, and multi-chain atomic settlement. To learn more about Anoma's vision, take a look at the Anoma Vision Paper.

This is an implementation of the Anoma protocol in Rust.

Warning

Here be dragons: this codebase is still experimental; use at your own risk!

💾 Installing

There is a single command to build and install the Anoma executables from source (the node, the client, and the wallet). This command will also verify that a compatible version of Tendermint is available and, if not, attempt to install it. Note that at least 16 GB of RAM is currently needed to build from source.

make install

After installation, the main `anoma` executable will be available on your `$PATH`.

To find how to use it, check out the User Guide section of the docs.

If you have Nix, you may opt to build and install Anoma using Nix. The Nix integration also takes care of making a compatible version of Tendermint available.

# Nix 2.4 and later
nix profile install

# All versions of Nix
nix-env -f . -iA anoma

For more detailed instructions and more install options, see the Install section of the User Guide.

⚙️ Development

# Build the provided validity predicate, transaction and matchmaker wasm modules
make build-wasm-scripts-docker

# Development (debug) build Anoma, which includes a validator and some default 
# accounts, whose keys and addresses are available in the wallet
ANOMA_DEV=true make

Using Nix

You may opt to get all of the dependencies to develop Anoma by entering the development shell:

# Nix 2.4 and above
nix develop

# All versions of Nix
nix-shell

Inside the shell, all of the make targets work as usual:

# Build the WASM modules without docker
make build-wasm-scripts

# Development build (uses cargo)
ANOMA_DEV=true make

It is also possible to use the Nix Rust infrastructure instead of Cargo to build the project crates. This method uses crate2nix to derive Nix expressions from Cargo.toml and Cargo.lock files. The workspace members are exposed as packages in flake.nix with a rust_ prefix. Variants where the ABCI-plus-plus feature flag is enabled are exposed with a :ABCI-plus-plus suffix.

# List all packages
nix flake show

# Build the `anoma_apps` crate with `ABCI-plus-plus` feature
nix build .#rust_anoma_apps:ABCI-plus-plus

# Build the (default) anoma package. It consists of wrappers for the Anoma
# binaries (`rust_anoma_apps`) that ensure `tendermint` is in `PATH`.
nix build .#anoma

Advantages:

  • Excellent build reproducibility (all dependencies pinned).
  • Individual crates are stored as Nix derivations and therefore cached in the Nix store.
  • Makes it possible to build Nix derivations of the binaries. Cargo build doesn't work in the Nix build environment because network access is not allowed, meaning that Cargo can't fetch dependencies; cargo vendor could be used to prefetch everything for Cargo, but cargo vendor does not work on our project at the moment.

Disadvantages:

  • Only works for Linux and Darwin targets. WASM builds in particular are not possible with this method: while crate2nix doesn't support targeting WASM, we should be able to build the WASM modules via Cargo, if only cargo vendor worked.

Note: If you have modified the Cargo dependencies (changed Cargo.lock), it is necessary to recreate the Cargo.nix expressions with crate2nix. Helpers are provided as flake apps (Nix 2.4 and later):

nix run .#generateCargoNix
nix run .#generateCargoNixABCI-plus-plus

Before submitting a PR, please make sure to run the following:

# Format the code
make fmt

# Lint the code
make clippy

🧾 Logging

To change the log level, set the ANOMA_LOG environment variable to one of:

  • error
  • warn
  • info
  • debug
  • trace

The default is set to info for all modules, except for Tendermint ABCI, which has a lot of debug logging.

For more fine-grained log-level settings, please refer to the tracing subscriber docs.

To switch on logging in tests that use the #[test] macro from test_log::test, set RUST_LOG, e.g. RUST_LOG=info cargo test -- --nocapture.
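The level names above could be parsed along these lines. This is a minimal illustrative sketch; the real node presumably delegates filtering to the tracing subscriber:

```rust
// Hypothetical sketch: map an `ANOMA_LOG` value onto a level enum.
// `Level` and `parse_level` are illustrative names, not from the codebase.
#[derive(Debug, PartialEq)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
    Trace,
}

fn parse_level(s: &str) -> Option<Level> {
    match s.to_ascii_lowercase().as_str() {
        "error" => Some(Level::Error),
        "warn" => Some(Level::Warn),
        "info" => Some(Level::Info),
        "debug" => Some(Level::Debug),
        "trace" => Some(Level::Trace),
        _ => None,
    }
}
```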

How to contribute

Please see the contributing page.

Dependencies

The ledger currently requires that Tendermint version 0.34.x is installed and available on your `$PATH`. The pre-built binaries and the source for 0.34.8 are here; it is also directly available in some package managers.

It can be installed by the `make install` command (which runs the `scripts/install/get_tendermint.sh` script).

anoma-archive's People

Contributors

acentelles, adrianbrink, atoz7, atozxx, awasunyin, batconjurer, celsobonutti, cwgoes, fraccaman, gabriella-fw, ggiecold, gnosed, grarco, james-chf, junkicide, juped, leontiad, mariari, memasdeligeorgakis, murisi, pablohildo, simonmasson, simsaladin, sribst, tzemanovic, yito88


anoma-archive's Issues

storage access gas metering

depends on #5

We need a way to measure storage access (reads/writes, bytes difference before and after storage update).

To start with, maybe we can have a version of the storage API in which each function additionally returns its gas cost (the precise values are not important for now) and the bytes difference, if any.

This is to be used when accessing storage from transactions and validity predicates.
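A minimal sketch of such a gas-reporting storage API, under stated assumptions: `Storage`, `BASE_READ_GAS`, and `GAS_PER_BYTE` are illustrative names and costs, not from the codebase.

```rust
use std::collections::HashMap;

const BASE_READ_GAS: u64 = 10; // illustrative constant
const GAS_PER_BYTE: u64 = 1; // illustrative constant

struct Storage {
    data: HashMap<String, Vec<u8>>,
}

impl Storage {
    // Returns the value (if any) together with the gas charged for the read.
    fn read(&self, key: &str) -> (Option<Vec<u8>>, u64) {
        match self.data.get(key) {
            Some(v) => (Some(v.clone()), BASE_READ_GAS + v.len() as u64 * GAS_PER_BYTE),
            None => (None, BASE_READ_GAS),
        }
    }

    // Returns the signed byte difference before/after the update and the gas cost.
    fn write(&mut self, key: &str, value: Vec<u8>) -> (i64, u64) {
        let old_len = self.data.get(key).map(|v| v.len()).unwrap_or(0) as i64;
        let diff = value.len() as i64 - old_len;
        let gas = BASE_READ_GAS + value.len() as u64 * GAS_PER_BYTE;
        self.data.insert(key.to_string(), value);
        (diff, gas)
    }
}
```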

Revisit account addresses

The current Basic and Validator addresses can be removed. Instead, we should have only a single address type.

An address could be generated on chain from e.g. a hash of some counter and the block hash, or a nonce, the first time it's written from a transaction.

A nice address scheme could e.g. use base58 encoding with some constant prefix.
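The counter-plus-block-hash idea could be sketched as follows. This is purely illustrative: std's `DefaultHasher` stands in for a real cryptographic hash, hex stands in for base58, and the "a" prefix is an assumed constant, not a decided scheme.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a deterministic address from an on-chain counter and the block hash.
fn gen_address(counter: u64, block_hash: &[u8]) -> String {
    let mut hasher = DefaultHasher::new(); // stand-in for a cryptographic hash
    counter.hash(&mut hasher);
    block_hash.hash(&mut hasher);
    // A real scheme would base58-encode the digest; hex keeps the sketch
    // dependency-free.
    format!("a{:016x}", hasher.finish())
}
```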

[Ledger][design] accounts

The initial page is at tech-specs/src/explore/design/ledger/accounts.md

To describe:

  • account types
  • accounts data
  • accounts addresses (should we use public key hashes?)
  • accounts life-cycle (how is an account created on chain, is it deleted when its balance is empty?)
  • storage fees

┆Issue is synchronized with this Asana task by Unito

Special built-in accounts / validity predicates in the state machine

In order to benefit from the n-party settlement system, we should plug certain special accounts into the state machine, namely:

  • Validators should have special accounts which can be referenced and associated with voting power. Possibly validators can control their own fee distribution system through this mechanism (by writing a custom validity predicate).
  • The proof-of-stake "module" should have a special account, where certain parameters (inflation rate, master chain, etc.) can be controlled by a 2/3 quorum of stake (this can be enacted through the n-party settlement system with the validator accounts as above). Additional flexibility is possible - ref https://github.com/heliaxdev/rd-pm/issues/12.
  • IBC bridges may require special accounts, or the IBC modules themselves may be treated as special accounts.


[Spec] Gossip layer - Orderbook (working name)

This is the tracking issue for the gossip layer prototype. The goals and deliverables, alongside any notes, will live in this repo under https://github.com/heliaxdev/rd-pm/blob/master/tech-specs/src/explore/prototypes/gossip-layer.md.

reopened from https://github.com/heliaxdev/rd-pm/issues/6

Steps:

  • repo: https://github.com/heliaxdev/anoma-prototype
  • intents
    • timestamp (creation timestamp by the client)
    • intents wasm
      • sell/buy token ID (e.g. ledger account address of the token)
      • sell/buy token amount (later we might want sell amount and a conversion rate instead)
  • intent mempool for the matchmaker (e.g. a HashMap<IntentId, Intent>), where the IntentId is hash of the intent
  • ability to submit transaction to the ledger (can re-use the client RPC call) - depends #73
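The intent mempool from the steps above could be sketched like this. The `Intent` fields are illustrative, and a real implementation would use a cryptographic hash for the `IntentId` rather than std's `DefaultHasher`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

#[derive(Hash, Clone)]
struct Intent {
    timestamp: u64, // creation timestamp set by the client
    sell_token: String,
    sell_amount: u64,
    buy_token: String,
    buy_amount: u64,
}

type IntentId = u64; // stand-in for a cryptographic hash of the intent

fn intent_id(intent: &Intent) -> IntentId {
    let mut h = DefaultHasher::new();
    intent.hash(&mut h);
    h.finish()
}

// Insert into the matchmaker's mempool, keyed by the intent's hash, so that
// re-submitting the same intent is a no-op.
fn insert(mempool: &mut HashMap<IntentId, Intent>, intent: Intent) -> IntentId {
    let id = intent_id(&intent);
    mempool.entry(id).or_insert(intent);
    id
}
```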

[ledger] sparse merkle tree

We're currently using this library https://github.com/heliaxdev/sparse-merkle-tree/tree/tomas/encoding. We only store the hashes (H256) of the key and values in the tree (more details in https://github.com/heliaxdev/rd-pm/blob/master/tech-specs/src/explore/design/db.md). The fork has added a borsh encoding.

TODOs:

  • review the code and tests
  • review the benchmarks
  • review the memory requirements as the tree grows (quick and dirty first version at heliaxdev/sparse-merkle-tree@623cedb - we should check the serialized version and proofs sizes too)
  • can the encoding for persistent storage be made more efficient?


Dependency health-check warnings

cargo audit reports some warnings. None of them seem of high importance to me. Nevertheless, leaving it here for more qualified people to triage:

    Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
      Loaded 263 security advisories (from /home/george/.cargo/advisory-db)
    Updating crates.io index
    Scanning Cargo.lock for vulnerabilities (267 crate dependencies)
Crate:         dirs
Version:       1.0.5
Warning:       unmaintained
Title:         dirs is unmaintained, use dirs-next instead
Date:          2020-10-16
ID:            RUSTSEC-2020-0053
URL:           https://rustsec.org/advisories/RUSTSEC-2020-0053
Dependency tree: 
dirs 1.0.5
└── term 0.5.2

Crate:         net2
Version:       0.2.37
Warning:       unmaintained
Title:         `net2` crate has been deprecated; use `socket2` instead
Date:          2020-05-01
ID:            RUSTSEC-2020-0016
URL:           https://rustsec.org/advisories/RUSTSEC-2020-0016
Dependency tree: 
net2 0.2.37
├── miow 0.2.2
└── mio 0.6.23

Crate:         term
Version:       0.4.6
Warning:       unmaintained
Title:         term is looking for a new maintainer
Date:          2018-11-19
ID:            RUSTSEC-2018-0015
URL:           https://rustsec.org/advisories/RUSTSEC-2018-0015
Dependency tree: 
term 0.4.6

Crate:         term
Version:       0.5.2
Warning:       unmaintained
Title:         term is looking for a new maintainer
Date:          2018-11-19
ID:            RUSTSEC-2018-0015
URL:           https://rustsec.org/advisories/RUSTSEC-2018-0015
Dependency tree: 
term 0.5.2

Crate:         pin-project-lite
Version:       0.2.4
Warning:       yanked
Dependency tree: 
pin-project-lite 0.2.4
├── tracing 0.1.23
│   ├── tendermint-rpc 0.18.1
│   │   └── Anoma 0.1.0
│   ├── tendermint-abci 0.18.1
│   │   └── Anoma 0.1.0
│   └── hyper 0.14.4
│       └── tendermint-rpc 0.18.1
├── tokio 1.2.0
│   ├── tendermint-rpc 0.18.1
│   ├── hyper 0.14.4
│   └── Anoma 0.1.0
└── futures-util 0.3.12
    ├── hyper 0.14.4
    ├── futures-executor 0.3.12
    │   └── futures 0.3.12
    │       ├── tendermint-rpc 0.18.1
    │       ├── tendermint 0.18.1
    │       │   └── tendermint-rpc 0.18.1
    │       └── Anoma 0.1.0
    └── futures 0.3.12

warning: 5 allowed warnings found

Gracefully shutdown to clean an Anoma node

We want to shut down an Anoma node gracefully.

  • Flush the memtable on RocksDB
  • Tendermint?
  • anything else?

Currently, I get some errors when shutting down a node with Ctrl-C:

I[2021-03-11|18:36:48.047] captured interrupt, exiting...               module=main
I[2021-03-11|18:36:48.047] Stopping Node service                        module=main impl=Node
I[2021-03-11|18:36:48.048] Stopping Node                                module=main
E[2021-03-11|18:36:48.049] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=consensus
I[2021-03-11|18:36:48.049] Stopping socketClient service                module=abci-client connection=consensus impl=socketClient
I[2021-03-11|18:36:48.048] Stopping EventBus service                    module=events impl=EventBus
I[2021-03-11|18:36:48.049] Stopping PubSub service                      module=pubsub impl=PubSub
I[2021-03-11|18:36:48.049] Stopping IndexerService service              module=txindex impl=IndexerService
I[2021-03-11|18:36:48.049] Stopping P2P Switch service                  module=p2p impl="P2P Switch"
I[2021-03-11|18:36:48.049] Stopping BlockchainReactor service           module=blockchain impl=BlockchainReactor
I[2021-03-11|18:36:48.049] Stopping Consensus service                   module=consensus impl=ConsensusReactor
E[2021-03-11|18:36:48.049] consensus connection terminated. Did the application crash? Please restart tendermint module=proxy err="read message: EOF"
I[2021-03-11|18:36:48.049] Stopping State service                       module=consensus impl=ConsensusState
I[2021-03-11|18:36:48.049] Stopping TimeoutTicker service               module=consensus impl=TimeoutTicker
E[2021-03-11|18:36:48.049] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=query
I[2021-03-11|18:36:48.050] Stopping socketClient service                module=abci-client connection=query impl=socketClient
I[2021-03-11|18:36:48.049] Stopping baseWAL service                     module=consensus wal=.anoma/tendermint/data/cs.wal/wal impl=baseWAL
E[2021-03-11|18:36:48.050] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=snapshot
E[2021-03-11|18:36:48.049] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=mempool
I[2021-03-11|18:36:48.050] Stopping socketClient service                module=abci-client connection=snapshot impl=socketClient
I[2021-03-11|18:36:48.050] Stopping socketClient service                module=abci-client connection=mempool impl=socketClient

I[2021-03-11|18:36:48.065] Stopping Group service                       module=consensus wal=.anoma/tendermint/data/cs.wal/wal impl=Group
I[2021-03-11|18:36:48.066] Stopping Evidence service                    module=evidence impl=Evidence
I[2021-03-11|18:36:48.066] Stopping StateSync service                   module=statesync impl=StateSync
I[2021-03-11|18:36:48.066] Stopping PEX service                         module=pex impl=PEX
I[2021-03-11|18:36:48.066] Stopping AddrBook service                    module=p2p book=.anoma/tendermint/config/addrbook.json impl=AddrBook
I[2021-03-11|18:36:48.066] Stopping Mempool service                     module=mempool impl=Mempool
I[2021-03-11|18:36:48.066] Saving AddrBook to file                      module=p2p book=.anoma/tendermint/config/addrbook.json size=0
E[2021-03-11|18:36:48.066] Stopped accept routine, as transport is closed module=p2p numPeers=0
I[2021-03-11|18:36:48.066] Closing rpc listener                         module=main listener="&{Listener:0xc0000b2678 sem:0xc00010aae0 closeOnce:{done:0 m:{state:0 sema:0}} done:0xc00010ab40}"
I[2021-03-11|18:36:48.066] RPC HTTP server stopped                      module=rpc-server err="accept tcp 127.0.0.1:26657: use of closed network connection"
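The flush-on-shutdown items above could be sketched as a RAII-style guard. `Node` is a hypothetical type, and the RocksDB memtable flush is simulated with a flag:

```rust
struct Node {
    pending_writes: Vec<String>,
    flushed: bool,
}

impl Node {
    // Run the graceful-shutdown sequence exactly once.
    fn shutdown(&mut self) {
        if !self.flushed {
            // In the real node this would flush the RocksDB memtable and
            // stop the Tendermint process before exiting.
            self.pending_writes.clear();
            self.flushed = true;
        }
    }
}

impl Drop for Node {
    fn drop(&mut self) {
        // Safety net: ensure the flush also runs on teardown paths that
        // unwind without an explicit `shutdown()` call (e.g. Ctrl-C handling
        // that drops the node).
        self.shutdown();
    }
}
```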

Improve logging

This is half research, half coding task.

The relevant spec page is at https://github.com/heliaxdev/rd-pm/blob/master/tech-specs/src/explore/libraries/logging.md

We currently have env_logger in place (albeit used sporadically), which is simple and works, but we'll probably want to switch to either slog or tracing.

Steps:

  • compare the main differences of the two options in the spec page
  • replace env_logger with the choice
  • improve logging usage in code
  • add another env var to set tendermint log level (currently it inherits the ANOMA_LOG)
  • set the default log level to info

What to do/add in the CI

It would be nice to have CI!
Let's use this issue to collect everything we want the CI to check.

[Ledger][design] validity predicates

The initial page is at tech-specs/src/explore/design/ledger/vp.md

To describe:

  • possible implementations
  • how are the transactions, the state storage and state updates to be validated presented to VPs?
  • gas accounting?


add gas metering for transaction application

We need a way to measure gas cost of transactions and accumulate the total gas cost of a transaction. We should build an interface for a gas counter.

Everything that happens in a transaction application should be accounted for, e.g.:

  • some base cost
  • the size of its bytes
  • runtime cost - to be done by #14
  • storage access to be done by #64
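The gas counter interface described above could look like this minimal sketch. Names and costs are illustrative assumptions, not the actual implementation:

```rust
#[derive(Debug, PartialEq)]
struct OutOfGas;

struct GasMeter {
    limit: u64,
    used: u64,
}

impl GasMeter {
    fn new(limit: u64) -> Self {
        GasMeter { limit, used: 0 }
    }

    // Accumulate gas; error out as soon as the transaction limit is crossed.
    fn add(&mut self, gas: u64) -> Result<(), OutOfGas> {
        self.used = self.used.saturating_add(gas);
        if self.used > self.limit {
            Err(OutOfGas)
        } else {
            Ok(())
        }
    }

    // e.g. some base cost plus a per-byte charge for the tx bytes
    // (`BASE_TX_GAS` is an illustrative constant).
    fn charge_tx_bytes(&mut self, tx_len: usize) -> Result<(), OutOfGas> {
        const BASE_TX_GAS: u64 = 100;
        self.add(BASE_TX_GAS + tx_len as u64)
    }
}
```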

add gas metering for running transactions code and validity predicates

we can use https://github.com/wasmerio/wasmer/tree/master/lib/middlewares, an example:

alternatively, there's also https://crates.io/crates/pwasm-utils, with an example how it's used in:

Steps:

  • check differences between the two options (or any other), safety is important - it shouldn't be possible to get around gas metering or perform computations with no or disproportionate gas costs (this is noted in the design spec: http://localhost:3000/explore/design/ledger/wasm-vm.html: "TODO: review wasmer gas metering, are there any loop-holes that could potentially escape metering?")
  • implement gas metering to measure running transactions code and VPs (for now unspecified units, unsigned int)
  • allow to set some gas limit on the gas meter
  • inject the gas metering into the wasm script before they run (it's more efficient to meter the gas inside the wasm calls, instead of relying on host calls)
  • allow to obtain the measured gas in the shell after the script call is done

add wasm VM memory implementation

Some parts of this depend on https://github.com/heliaxdev/rd-pm/pull/37 (what data will be passed from transactions to VPs and the wasm environment APIs).

This guide is a good intro to wasm memory: https://radu-matei.com/blog/practical-guide-to-wasm-memory/#exchanging-strings-between-modules-and-runtimes

steps:

  • research and decide for an option how to pass the data via memory
    • some options to consider:
      • using "C" structures
      • (de)serializing the data with some encoding (JSON, binary)
      • any other?
    • the choice should allow for easy usage in wasm for users (e.g. in Rust a bindgen macro on data structures)
  • allow to share data bi-directionally between the host (Rust shell) and the guest (wasm)
    • tx:
      • host-to-guest: pass tx.data to tx code call
      • guest-to-host: receive parameters from environment calls and storage modifications (pending on storage API) and send back the results, if any
    • VP:
      • host-to-guest: pass tx.data, prior and posterior account storage sub-space state and/or storage modifications for the account
      • guest-to-host: receive parameters from environment calls and send back the results, if any
  • handle memory access safely (has TODO in-code ledger/vm/src/lib.rs:140)
  • handle passing the values to the wasm calls and out of their results (has TODO in-code ledger/vm/src/lib.rs:154)
  • consider sharing the data in wasm memory from tx wasm to VPs to avoid a pass-through the host? (has TODO in-code ledger/vm/src/lib.rs:35)
    • I'm not sure if this is feasible, because a transaction can modify multiple accounts, but each account's VP should only see the modifications for its storage
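One of the options listed above, (de)serializing the data with a binary encoding, can be illustrated with a hand-rolled length-prefixed framing for a byte buffer that would be copied into (and read back from) linear wasm memory. Borsh or another established encoding would replace this in practice:

```rust
// Frame a payload with a little-endian u32 length prefix.
fn encode(data: &[u8]) -> Vec<u8> {
    let mut buf = (data.len() as u32).to_le_bytes().to_vec();
    buf.extend_from_slice(data);
    buf
}

// Recover the payload, returning None if the buffer is truncated.
fn decode(buf: &[u8]) -> Option<&[u8]> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_le_bytes(len_bytes) as usize;
    buf.get(4..4 + len)
}
```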

DB keys schema

The initial database is using a simple tree schema: https://github.com/heliaxdev/anoma-prototype/blob/6c59f0dc2c8859ce564a37e5923d1a87e7867ffa/ledger/src/bin/anoma-node/shell/storage/db.rs#L3

The h/balance/address key is temporary. Instead of balance, accounts will have validity predicates and some generic storage subspace, an address and a counter (pending on https://github.com/heliaxdev/rd-pm/issues/25)

The keys are built from key segments (KeySeg trait https://github.com/heliaxdev/anoma-prototype/blob/6c59f0dc2c8859ce564a37e5923d1a87e7867ffa/ledger/src/bin/anoma-node/shell/storage/types.rs#L61)

Better key schema options should be explored. Ideally, the keys should be std::cmp::Ord and key segments for custom types should have the same order as their raw data (e.g. Address).

If the keys can be made of only integers (e.g. by having a 1-to-1 mapping of key segments to integers of a fixed size), then the default key comparator can be used (https://github.com/facebook/rocksdb/wiki/Basic-Operations#comparators). Otherwise, a custom key comparator should be provided to the RocksDB options, to fix https://github.com/heliaxdev/anoma-prototype/blob/6c59f0dc2c8859ce564a37e5923d1a87e7867ffa/ledger/src/bin/anoma-node/shell/storage/db.rs#L212
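The integer-segment idea works because fixed-size big-endian encodings compare lexicographically in the same order as the integers themselves, so a default byte-wise comparator sorts them correctly (which decimal strings do not). A small sketch:

```rust
// Encode one key segment as a fixed-size big-endian integer.
fn encode_seg(n: u64) -> [u8; 8] {
    n.to_be_bytes()
}

// Concatenate segments into a full storage key.
fn key(segments: &[u64]) -> Vec<u8> {
    segments.iter().flat_map(|s| encode_seg(*s)).collect()
}
```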

update-able validity predicates

depends on #16

Make it possible to update VPs on chain via a transaction, which should be validated by the current VP version (before the update)
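The update rule above, that the pre-update VP gets the last word on its own replacement, can be sketched as follows. `Vp` is an illustrative stand-in for a wasm validity predicate:

```rust
// A validity predicate, reduced to a plain function for the sketch.
type Vp = fn(tx_data: &[u8]) -> bool;

struct Account {
    vp: Vp,
}

// Accept a VP update transaction only if the *current* VP validates it.
fn update_vp(account: &mut Account, new_vp: Vp, tx_data: &[u8]) -> Result<(), &'static str> {
    if (account.vp)(tx_data) {
        account.vp = new_vp;
        Ok(())
    } else {
        Err("rejected by current VP")
    }
}
```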

Combinatorial auctions for block space

A further complication for gas metering is that transactions will start out encrypted if we use a DKG/TPKE, and validators will have to commit to an execution order prior to decryption (this is what prevents front-running). Thus there will need to be a separate (not-encrypted) "gas payer" account which can be checked prior to committing to executing the transaction (to prevent DoS).

One way to simplify this logic, and to provide more predictability for transaction execution times / guaranteed throughput for particular applications, would be to use the n-party settlement system in conjunction with validator accounts to run combinatorial auctions for future block space, where users or relayers can bid for some amount of block space (storage & compute) across time, and validators can allocate that space in advance (for incentive compatibility, so that validators actually execute the transactions, the bid amounts would be put in escrow & only half paid out if no transactions are actually included in the allocated space).

I haven't thought through any secondary consequences, so we should think about this carefully.


wasm env

depends on anoma/anoma#16

Steps:

  • Common wasm env (usable in both transaction codes and VPs):
    • logging #19
    • storage read-only API #20
    • math & crypto (needs some more design details)
      • ed25519 is being added in anoma/anoma#160
      • anoma/namada#4
      • anoma/anoma#225
      • anoma/anoma#163
    • panics & aborts (also needs some more design details)
      • for panics, we probably just want to add [profile.release] panic = "abort" to Cargo.toml, which tells the compiler to simply abort on panics
      • we could also add a host function for explicit panics from wasm (not needed, the panic! macro works)
  • Transaction env:
    • storage write access for all public state anoma/anoma#82
    • initialize a new account anoma/anoma#102
  • Validity predicate env:
    • storage read access to account's sub-space for prior and posterior state (depends on anoma/anoma#82)
  • wasm env injection anoma/anoma#79
  • add eval-like function to VP wasm anoma/anoma#165


when run via `cargo run`, run anoma sub-commands via `cargo run` too

The TODO marker is at ledger/src/bin/anoma/cli.rs:20.

base ledger prototype version 2

This is the tracking issue for the base ledger prototype version 2 (a follow-up to base ledger prototype version 1, https://github.com/heliaxdev/rd-pm/issues/5).

I think if we're happy with prototype version 2, we could start the next phase described in https://github.com/heliaxdev/anoma-prototype/tree/master/tech-specs/src/explore/prototypes#advancing-a-successful-prototype

Let's agree on prototype plan anoma/anoma#63 first.

Steps:

  • storage
    • anoma/anoma#6
    • anoma/anoma#4 - non-blocking for now
    • anoma/anoma#5
    • anoma/anoma#64
  • wasm
    • anoma/anoma#16
    • anoma/anoma#21 (tracking issue with sub-steps)
    • anoma/anoma#14 (gas)
    • anoma/anoma#18 (gas)
    • anoma/anoma#136 (gas)
    • anoma/anoma#15
    • anoma/anoma#17
    • anoma/anoma#22 - we have basic wasm validation sufficient for now, but the issue is left open for improvements
    • validity predicates
      • anoma/namada#3 - we can skip for now as without PoS the validators VPs cannot do much
      • anoma/anoma#67
      • anoma/anoma#68
    • potentially anoma/anoma#24
  • For the orderbook
    • anoma/anoma#73
    • anoma/anoma#35
  • anoma/anoma#10
  • gas counter anoma/anoma#65
  • anoma/anoma#111 VP gas
  • anoma/anoma#23
  • anoma/anoma#66
  • anoma/anoma#69
  • anoma/anoma#71
  • anoma/anoma#75
  • anoma/anoma#101
  • anoma/anoma#104
  • potentially anoma/anoma#72

[Ledger][design] transaction execution

The initial page is at tech-specs/src/explore/design/ledger/tx-execution.md

To describe:

  • transaction types
  • transaction's arbitrary code
  • how are the state updates represented?
  • validity predicate checks
  • commit, if all VPs accept it, or drop a transaction


Intent description

The initial page is at tech-specs/src/explore/design/gossip/intent.md

to describe

  • intent type
  • relation with transaction
  • time to live logic
  • ...


Modify intents when gossiping to other nodes

In order to implement the incentive logic, it's necessary to modify the intent in a certain way before gossiping it to other nodes.

this part should allow us to implement such a thing:

https://docs.rs/libp2p-gossipsub/0.28.0/libp2p_gossipsub/trait.DataTransform.html

The gossipsub behaviour needs a message_id for each propagated message, so the message_id should remain the same after the modification of the intent. This allows us to fully use the gossipsub network, where each node can request a specific message by that id.
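The stable-message-id requirement can be sketched by deriving the id only from the immutable intent payload, so that adding incentive data before re-gossiping does not change it. Field names are illustrative, and std's `DefaultHasher` stands in for the real message-id function:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct GossipIntent {
    payload: Vec<u8>,        // the original intent; hashed into the message id
    incentives: Vec<String>, // mutable per-hop data, excluded from the id
}

// The id covers only the immutable payload, so per-hop modifications of the
// incentive data leave it unchanged.
fn message_id(intent: &GossipIntent) -> u64 {
    let mut h = DefaultHasher::new();
    intent.payload.hash(&mut h);
    h.finish()
}
```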


Fee design & per-contract validator set fee negotiation

To prevent spam and appropriately price execution costs, the base ledger's trade settlement system must charge fees proportional to the database read/write and compute costs incurred by transaction execution (all phases). Fees merely proportional to transaction execution costs, however, misalign long-term interests of system users and stakeholders, since proof-of-stake consensus security must be greater than (and thus proportional to) the amount of value transacted, since this is what could be gained by subverting intended operations (e.g. by bribing validators). For this reason, we should aim to architect a fee model which is at least in part proportional to the value of a settled trade (more similar to existing custody institutions, in a sense). That way expected long-term value accrual of the staking token will end up being proportional to trade volume (in an approximate sense) instead of execution volume, which is more likely to fund the requisite security.

Note: One could object to this argument on the basis that transaction fees will converge to trade value as block space becomes scarce - as one sees happening on Ethereum at the moment, to some extent - however we do not wish to rely on this, as it requires constraining system throughput (artificially, assuming we could do otherwise) and thus also prevents many otherwise-possible (and otherwise fee-paying) transactions from being settled.

What exactly the proportion is we can determine later - likely it will be quite small - but the tricky part from the design perspective is to measure the value of a trade in the first place. This problem certainly cannot be solved in general for a ledger which supports arbitrary state transitions and places no constraints on data semantics (i.e. value), and privacy features exacerbate the difficulty, e.g. private MASP transactions provide no information about the value transacted, and any attempt at a proportional charge by circuit change will result in data leakage if the fraction is known and the fee is sent to a public address. That said, it is much less critical to charge value-proportionally for transfers (though we should keep thinking about this), and much more critical to charge value-proportionally for trades, which may often have partially public data (e.g. price, amount, tokens) by necessity of counterparty negotiation at the intent discovery layer. If there is some roughly proportional relation between trade volume and value of custodied assets, charging proportionally for most trades should be sufficient to achieve the value-capture / security proportionality which we need (this is a hand-wavy argument that should be formalised).

To that end, keeping in mind the additional design constraints of avoiding hardcoded order semantics on the base ledger and assuming that market conditions will change rapidly, one idea I have is to enact a kind of collective negotiation between the validator set and particular contracts (with particular validity predicates) on a sort of "fee-sharing" based on the particular semantics of that contract. The basic setup is as follows:

  1. The "validator set", as a collective entity, has a special validity predicate on the ledger which is called for every transaction and enforces fee payments. This validity predicate code is mutable and can be altered at any time by a 2/3 majority vote of the validator set (this does not change the security model, since 2/3 of validators can sign anything anyways, and they can also threshold commit with individual validity predicates, no complex procedure is necessary).
  2. When a new contract becomes relatively popular, the validator set can inspect this contract and choose a fee sharing model based on its particular semantics. For example, suppose an AMM contract - maybe even partially private - starts becoming popular. The validator set reads the contract and notes how it calculates fees for each trade.
  3. The validator set then alters their collective validity predicate to require some trade-value-proportional fee sharing (in this example, that 1% of the fee paid to the AMM LP is instead directed to the stakers). The validator set may also elect to reduce or eliminate computational-cost-associated fees if they can be bounded in advance.
  4. Transactions which use this contract (e.g. AMM) are now subject to the new fee requirements (which clients must track).

This process can happen quite rapidly and requires no ledger upgrades, although we should keep UX in mind. Another point to consider is the particulars of how contracts ought to "fee share" and whether they need to build in some flexibility in advance with this system in mind.
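As a toy model of step 3 (all names and numbers here are hypothetical, not part of the actual protocol), the collective validity predicate could check that each transaction actually routes the negotiated fraction of the contract's own fee to the stakers:

```rust
// Hypothetical sketch of a collective validator-set fee-sharing rule.
// The types and numbers are illustrative, not the Anoma protocol itself.

/// A fee-sharing policy negotiated by the validator set for one contract.
struct FeeSharePolicy {
    /// Fraction of the contract's own fee redirected to stakers, in basis points.
    staker_share_bps: u64,
}

impl FeeSharePolicy {
    /// Split a contract fee into (staker_cut, lp_cut).
    fn split(&self, contract_fee: u64) -> (u64, u64) {
        let staker_cut = contract_fee * self.staker_share_bps / 10_000;
        (staker_cut, contract_fee - staker_cut)
    }

    /// The collective validity predicate accepts a transaction only if the
    /// amount actually paid to stakers meets the negotiated share.
    fn validity_predicate(&self, contract_fee: u64, paid_to_stakers: u64) -> bool {
        paid_to_stakers >= self.split(contract_fee).0
    }
}

fn main() {
    // The example from the text: 1% (100 bps) of the AMM LP fee goes to stakers.
    let policy = FeeSharePolicy { staker_share_bps: 100 };
    let (stakers, lps) = policy.split(50_000);
    println!("stakers get {stakers}, LPs keep {lps}");
    assert!(policy.validity_predicate(50_000, stakers));
}
```

Since the share is expressed in basis points, the validator set can renegotiate it per contract without touching the base ledger.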

This is just brainstorming, thoughts welcome.

┆Issue is synchronized with this Asana task by Unito

Cross-chain trade settlement

Cross-chain settlement will be essential for dealing with assets originating on other chains and interfacing between local instances of the Anoma protocol.

There are several methods; they are not mutually exclusive, but each method will likely make the most sense for a particular class of use-cases.

  • First method: transfer-to-Anoma
    • Advantage: efficient (once transferred)
    • Disadvantage: users must transfer assets
    • Steps
      • Transfer assets over IBC
      • Trade on Anoma chain
      • Transfer assets back (if desired) once trading complete
  • Second method: lock-to-Anoma-chain
    • Advantage: more efficient, generalised
    • Disadvantage: users must lock assets
    • Steps
      • Verify state of other chains on Anoma
      • Only Anoma chain can now change state
      • Atomically execute Anoma order to change state on Anoma chain
      • Relay proofs to other chains
      • Other chains can now change state
      • Anoma chain prevents double-spending conflicting orders
  • Third method: optimistic concurrency
    • Advantage: users need not lock assets
    • Disadvantage: less efficient, requires escrow step
    • Steps
      • Verify state of other chains on Anoma
      • State of other chains can be changed (e.g. by users)
      • Execute Anoma order, relay state changes
      • Wait for proof of asset escrow
      • On timeout, revert the escrow and relay proof of the revert to other chains
      • If escrow successful, Anoma chain finalises order
      • Assume eventual liveness

add stack height metering for wasm VM

related issue #14

Similarly to #14, we can use pwasm-utils, as in https://github.com/near/nearcore/blob/b973883a7fcf5a4534248c2bad89c9825bac019f/runtime/near-vm-runner/src/prepare.rs#L68, to limit the stack height with some configurable limit.

steps:

  • can we prevent stack overflows the same way with wasmer, or is there another option?
  • should we abort when the limit is exceeded? The result must be deterministic
  • allow setting a configurable stack height limit
  • inject stack height metering into the wasm code that enforces the limit
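Conceptually, stack height metering injects a global counter that is incremented and checked around every call, trapping deterministically when the limit is exceeded (this is the effect of the instrumentation pwasm-utils injects). A toy model of the idea on a recursive interpreter, not real wasm:

```rust
// Toy model of stack height metering: a recursive "interpreter" tracks its
// own call depth and aborts deterministically at a configurable limit,
// mirroring the counter that pwasm-utils injects into wasm bytecode.

struct Meter {
    height: u32,
    limit: u32,
}

#[derive(Debug, PartialEq)]
enum Trap {
    StackOverflow,
}

impl Meter {
    fn new(limit: u32) -> Self {
        Meter { height: 0, limit }
    }

    /// Enter a call frame; trap if the configured limit would be exceeded.
    fn enter(&mut self) -> Result<(), Trap> {
        if self.height >= self.limit {
            return Err(Trap::StackOverflow);
        }
        self.height += 1;
        Ok(())
    }

    fn exit(&mut self) {
        self.height -= 1;
    }
}

/// A deeply recursive function instrumented with the meter.
fn recurse(meter: &mut Meter, depth: u32) -> Result<u32, Trap> {
    meter.enter()?;
    let result = if depth == 0 { 0 } else { recurse(meter, depth - 1)? + 1 };
    meter.exit();
    Ok(result)
}

fn main() {
    // Within the limit: succeeds; beyond it: traps instead of overflowing.
    assert_eq!(recurse(&mut Meter::new(1024), 100), Ok(100));
    assert_eq!(recurse(&mut Meter::new(1024), 2048), Err(Trap::StackOverflow));
}
```

Because the trap depends only on the configured limit and the guest's call pattern, the result is the same on every node, which is the determinism requirement above.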

refactor config

  • Refactor the configuration in ledger/src/lib/config.rs using https://crates.io/crates/config
  • Refactor ledger/src/bin/anoma-node/gossip/config.rs from #26
  • move the hard-coded tendermint address into config (ledger/src/bin/anoma-node/shell/mod.rs:34 and src/bin/anoma-client/cli.rs)
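The core idea of the `config` crate is layered sources (hard-coded defaults overridden by a config file, overridden in turn by the environment), which can be sketched with plain maps. The keys below, like the Tendermint address, are illustrative:

```rust
// Sketch of the layered-lookup model used by the `config` crate: later
// layers (e.g. values parsed from a file) override earlier ones (e.g.
// hard-coded defaults). Keys and values shown are illustrative.

use std::collections::HashMap;

struct LayeredConfig {
    /// Layers in increasing priority: defaults, then file, then env.
    layers: Vec<HashMap<String, String>>,
}

impl LayeredConfig {
    fn get(&self, key: &str) -> Option<&str> {
        // Search from the highest-priority layer down.
        self.layers
            .iter()
            .rev()
            .find_map(|layer| layer.get(key).map(String::as_str))
    }
}

fn main() {
    let defaults = HashMap::from([(
        "tendermint_address".to_string(),
        "127.0.0.1:26658".to_string(),
    )]);
    // Pretend this layer was parsed from a config file.
    let file = HashMap::from([(
        "tendermint_address".to_string(),
        "0.0.0.0:26658".to_string(),
    )]);

    let config = LayeredConfig { layers: vec![defaults, file] };
    // The file layer overrides the hard-coded default.
    println!("{:?}", config.get("tendermint_address"));
}
```

With this shape, moving the hard-coded Tendermint address into the defaults layer means it remains a fallback rather than a fixed value.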

Efficient and durable write on RocksDB

The current implementation flushes the memtable on every put, as seen below.
https://github.com/heliaxdev/anoma-prototype/blob/b6086d8f67a1acbc771c009e1f9a7cbfca8f85fa/ledger/src/bin/anoma-node/shell/storage/db.rs#L108
We can remove the db.flush() and instead pass WriteOptions via db.put_opt() / db.write_opt() to sync the WAL to the disk.
https://github.com/facebook/rocksdb/wiki/Basic-Operations#synchronous-writes
https://docs.rs/rocksdb/0.15.0/rocksdb/struct.WriteOptions.html#method.set_sync

RocksDB (and LSM-tree-based DBs in general) buffers writes in the memtable so that it can perform efficient large writes to the disk; a flush is what writes the memtable contents out to an SSTable on the disk.
The memtable lives in memory, but the DB writes the WAL to the disk at the same time.
The WAL can be used to recover data after the DB crashes.
That is why, for our requirements, it is enough to wait only for the WAL to be stored on the disk.

https://github.com/facebook/rocksdb/wiki/RocksDB-Overview#3-high-level-architecture
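The durability argument can be modelled in miniature: on every put it suffices to append the record to the synced WAL, while the memtable flush to an SSTable can happen lazily. A toy sketch (not the real rocksdb API):

```rust
// Toy model of the RocksDB write path: every put is appended to a WAL
// (standing in for the synced on-disk log) while the memtable flush to an
// SSTable is deferred. After a simulated crash, replaying the WAL recovers
// the data, so there is no need to call flush() on every write.

use std::collections::BTreeMap;

#[derive(Default)]
struct ToyDb {
    memtable: BTreeMap<String, String>,
    wal: Vec<(String, String)>, // stands in for the synced on-disk log
}

impl ToyDb {
    /// put: append to the WAL (the durable part), then update the memtable.
    fn put(&mut self, key: &str, value: &str) {
        self.wal.push((key.to_string(), value.to_string()));
        self.memtable.insert(key.to_string(), value.to_string());
    }

    /// Simulate a crash: the in-memory memtable is lost, the WAL survives.
    fn crash(&mut self) {
        self.memtable.clear();
    }

    /// Recovery: rebuild the memtable by replaying the WAL in order.
    fn recover(&mut self) {
        for (k, v) in &self.wal {
            self.memtable.insert(k.clone(), v.clone());
        }
    }

    fn get(&self, key: &str) -> Option<&str> {
        self.memtable.get(key).map(String::as_str)
    }
}

fn main() {
    let mut db = ToyDb::default();
    db.put("height", "42");
    db.crash();
    db.recover();
    assert_eq!(db.get("height"), Some("42"));
}
```

In real RocksDB the equivalent of the "durable part" is WriteOptions with set_sync(true), which waits only for the WAL write, not for a memtable flush.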

Combinatorial auction proof-of-stake

If we want to, we can write a very flexible combinatorial auction proof-of-stake system using our account paradigm:

  • Each validator has an account which can accept slashing conditions & lockup in return for promise of fees
    • Fees can depend on signatures (consensus participation) as verified by the protocol
  • Stakeholders (by 2/3 majority) can agree on what fees to offer for what lockup periods / slashing conditions

If implemented this way, the system is very flexible because the details (e.g. the unbonding period) are no longer "hard-coded", so e.g.

  • Different validators can agree to different unbonding periods (longer unbonding ~ more fees)
  • Inflation can vary over time according to buy-sell demand curve of validators & tokens
  • Atomic switch to accepting validator set updates from another chain (e.g. over IBC) probably becomes easier

This may be too complex, and certainly some fixed assumptions (e.g. a minimum unbonding period) are helpful for other chains interacting with Anoma (running light clients); we don't necessarily want everything to be dynamically settled. Merits discussion.
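A minimal sketch of the "longer unbonding ~ more fees" idea, where validators bid (lockup period, fee share) and stakeholders accept offers against an agreed curve (the rate curve and all numbers are made up):

```rust
// Illustrative sketch of validators bidding (lockup period, fee share) and
// stakeholders accepting offers; the rate curve and numbers are invented.

struct Offer {
    validator: String,
    unbonding_days: u32,
    /// Annual fee share requested, in basis points.
    fee_bps: u64,
}

/// Hypothetical stakeholder acceptance rule (as agreed by 2/3 majority):
/// pay at most a base rate plus a premium growing with the unbonding period.
fn acceptable(offer: &Offer) -> bool {
    let max_bps = 200 + 5 * offer.unbonding_days as u64;
    offer.fee_bps <= max_bps
}

fn main() {
    let offers = vec![
        Offer { validator: "a".into(), unbonding_days: 21, fee_bps: 300 },
        Offer { validator: "b".into(), unbonding_days: 7, fee_bps: 400 },
    ];
    let accepted: Vec<&str> = offers
        .iter()
        .filter(|o| acceptable(o))
        .map(|o| o.validator.as_str())
        .collect();
    println!("accepted validators: {accepted:?}");
}
```

Because the acceptance rule lives in an account's validity predicate rather than in the protocol, the curve (or the whole auction format) can change without a ledger upgrade.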
