
Framework for building smart contracts in Wasm for the Cosmos SDK

Home Page: https://www.cosmwasm.com/

License: Apache License 2.0


cosmwasm's Introduction

CosmWasm


WebAssembly Smart Contracts for the Cosmos SDK.

Packages

The following packages are maintained here:

Crate            Usage
cosmwasm-crypto  Internal only
cosmwasm-derive  Internal only
cosmwasm-schema  Contract development
cosmwasm-core    Internal only
cosmwasm-std     Contract development
cosmwasm-vm      Host environments
cosmwasm-check   Contract development

cosmwasm-storage is no longer maintained and has been dropped in favor of cw-storage-plus.

Overview

Getting a contract to interact with a full system requires many moving parts. To get oriented, here is a list of the various components of the CosmWasm ecosystem:

Standard library:

This code is compiled into Wasm bytecode as part of the smart contract.

  • cosmwasm-std - A crate in this workspace. Provides the bindings and all imports needed to build a smart contract.
  • cw-storage-plus - A crate which provides convenience helpers for interacting with storage with powerful types supporting composite primary keys, secondary indexes, automatic snapshotting, and more. This is used in most modern contracts.
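
For illustration, here is a minimal sketch of the kind of typed storage cw-storage-plus provides. The OWNER/BALANCES constants, key names, and helper function are hypothetical, not part of this repo:

// State definitions using cw-storage-plus (assumed as a dependency in Cargo.toml)
use cosmwasm_std::{Addr, Uint128};
use cw_storage_plus::{Item, Map};

// A single typed value stored under the key "owner"
pub const OWNER: Item<Addr> = Item::new("owner");
// A typed map keyed by address, stored under the "balances" namespace
pub const BALANCES: Map<&Addr, Uint128> = Map::new("balances");

// Example accessor (the Storage handle comes from deps.storage inside an entry point)
pub fn bump_balance(
    storage: &mut dyn cosmwasm_std::Storage,
    addr: &Addr,
    amount: Uint128,
) -> cosmwasm_std::StdResult<Uint128> {
    let new_balance = BALANCES.may_load(storage, addr)?.unwrap_or_default() + amount;
    BALANCES.save(storage, addr, &new_balance)?;
    Ok(new_balance)
}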

Building contracts:

  • cosmwasm-template - A starter-pack to get you quickly building your custom contract compatible with the cosmwasm system.

  • cosmwasm-plus - Some sample contracts for use and inspiration. These provide usable primitives and interfaces for many use cases, such as fungible tokens, NFTs, multisigs, governance votes, staking derivatives, and more. Look in packages for docs on the various standard interfaces, and contracts for the implementations. Please submit your contract or interface via PR.

  • rust-optimizer - A docker image and scripts to take your Rust code and produce the smallest possible Wasm output, deterministically. This is designed both for preparing contracts for deployment as well as validating that a given deployed contract is based on some given source code, allowing a similar contract verification algorithm as Etherscan.

    Building locally instead of using the docker image can leak some information about the directory structure of your system and makes the build non-reproducible.

  • serde-json-wasm - A custom json library, forked from serde-json-core. This provides an interface similar to serde-json, but without any floating-point instructions (non-deterministic) and producing builds around 40% of the code size.
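
As a rough illustration of the JSON round-trip contracts rely on: the to_vec/from_slice helpers in cosmwasm-std 1.x are backed by serde-json-wasm under the hood. The Config type here is made up for the example:

use cosmwasm_std::{from_slice, to_vec, StdResult};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Config {
    owner: String,
    paused: bool,
}

fn json_roundtrip() -> StdResult<()> {
    let cfg = Config { owner: "alice".to_string(), paused: false };
    let bytes = to_vec(&cfg)?;                 // serialize (no floating point involved)
    let parsed: Config = from_slice(&bytes)?;  // deserialize back into the typed struct
    assert_eq!(cfg, parsed);
    Ok(())
}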

Executing contracts:

  • cosmwasm-vm - A crate in this workspace. Uses the wasmer engine to execute a given smart contract. Also contains code for gas metering, storing, and caching wasm artifacts.
  • wasmvm - High-level go bindings to all the power inside cosmwasm-vm. Easily allows you to upload, instantiate and execute contracts, making use of all the optimizations and caching available inside cosmwasm-vm.
  • wasmd - A basic Cosmos SDK app to host WebAssembly smart contracts. It can be run as is, or you can import the x/wasm module from it and use it in your blockchain. It is designed to be imported and customized for other blockchains, rather than forked.
  • cosmwasm-check - A CLI tool and a crate in this workspace. Used to verify a Wasm binary is a CosmWasm smart contract suitable for uploading to a blockchain with a given set of capabilities.

Creating a Smart Contract

You can see some examples of contracts under the contracts directory. They are simple and self-contained, primarily meant for testing purposes, but that also makes them easier to understand.

You can also look at cosmwasm-plus for examples and inspiration on more production-like contracts and also how we call one contract from another. If you are working on DeFi or Tokens, please look at the cw20, cw721 and/or cw1155 packages that define standard interfaces as analogues to some popular ERC designs. (cw20 is also inspired by erc777).

If you want to get started building your own contract, the simplest way is to go to the cosmwasm-template repository and follow the instructions. This will give you a simple contract along with tests and a properly configured build environment. From there you can edit the code to add your desired logic and publish it as an independent repo.

We also recommend you review our documentation site which contains a few tutorials to guide you in building your first contracts. We also do public workshops on various topics about once a month. You can find past recordings under the "Videos" section, or join our Discord server to ask for help.

Minimum Supported Rust Version (MSRV)

See Minimum Supported Rust Version (MSRV).

API entry points

WebAssembly contracts are basically black boxes. They have no default entry points, and no access to the outside world by default. To make them useful, we need to add a few elements.

If you haven't worked with WebAssembly before, please read an overview on how to create imports and exports in general.

Exports

The required exports provided by the cosmwasm smart contract are:

// signal for 1.0 compatibility
extern "C" fn interface_version_8() -> () {}

// copy memory to/from host, so we can pass in/out Vec<u8>
extern "C" fn allocate(size: usize) -> u32;
extern "C" fn deallocate(pointer: u32);

// creates the initial state of a contract from a configuration sent in the argument msg_ptr
extern "C" fn instantiate(env_ptr: u32, info_ptr: u32, msg_ptr: u32) -> u32;

Contracts may also implement one or more of the following to extend their functionality:

// modify the state of the contract
extern "C" fn execute(env_ptr: u32, info_ptr: u32, msg_ptr: u32) -> u32;

// query the state of the contract
extern "C" fn query(env_ptr: u32, msg_ptr: u32) -> u32;

// in-place contract migrations
extern "C" fn migrate(env_ptr: u32, msg_ptr: u32) -> u32;

// support submessage callbacks
extern "C" fn reply(env_ptr: u32, msg_ptr: u32) -> u32;

// expose privileged entry points to Cosmos SDK modules, not external accounts
extern "C" fn sudo(env_ptr: u32, msg_ptr: u32) -> u32;

// and to write an IBC application as a contract, implement these:
extern "C" fn ibc_channel_open(env_ptr: u32, msg_ptr: u32) -> u32;
extern "C" fn ibc_channel_connect(env_ptr: u32, msg_ptr: u32) -> u32;
extern "C" fn ibc_channel_close(env_ptr: u32, msg_ptr: u32) -> u32;
extern "C" fn ibc_packet_receive(env_ptr: u32, msg_ptr: u32) -> u32;
extern "C" fn ibc_packet_ack(env_ptr: u32, msg_ptr: u32) -> u32;
extern "C" fn ibc_packet_timeout(env_ptr: u32, msg_ptr: u32) -> u32;

allocate/deallocate allow the host to manage data within the Wasm VM. If you're using Rust, you can implement them by simply re-exporting them from cosmwasm::exports. instantiate, execute and query must be defined by your contract.

Imports

The imports provided to give the contract access to the environment are:

// This interface will compile into required Wasm imports.
// Complete documentation of these functions is available in the VM that provides them:
// https://github.com/CosmWasm/cosmwasm/blob/v1.0.0-beta/packages/vm/src/instance.rs#L89-L206
extern "C" {
    fn db_read(key: u32) -> u32;
    fn db_write(key: u32, value: u32);
    fn db_remove(key: u32);

    // scan creates an iterator, which can be read by consecutive next() calls
    #[cfg(feature = "iterator")]
    fn db_scan(start_ptr: u32, end_ptr: u32, order: i32) -> u32;
    #[cfg(feature = "iterator")]
    fn db_next(iterator_id: u32) -> u32;

    fn addr_validate(source_ptr: u32) -> u32;
    fn addr_canonicalize(source_ptr: u32, destination_ptr: u32) -> u32;
    fn addr_humanize(source_ptr: u32, destination_ptr: u32) -> u32;

    /// Verifies message hashes against a signature with a public key, using the
    /// secp256k1 ECDSA parametrization.
    /// Returns 0 on verification success, 1 on verification failure, and values
    /// greater than 1 in case of error.
    fn secp256k1_verify(message_hash_ptr: u32, signature_ptr: u32, public_key_ptr: u32) -> u32;

    fn secp256k1_recover_pubkey(
        message_hash_ptr: u32,
        signature_ptr: u32,
        recovery_param: u32,
    ) -> u64;

    /// Verifies message hashes against a signature with a public key, using the
    /// secp256r1 ECDSA parametrization.
    /// Returns 0 on verification success, 1 on verification failure, and values
    /// greater than 1 in case of error.
    fn secp256r1_verify(message_hash_ptr: u32, signature_ptr: u32, public_key_ptr: u32) -> u32;

    fn secp256r1_recover_pubkey(
      message_hash_ptr: u32,
      signature_ptr: u32,
      recovery_param: u32,
    ) -> u64;

    /// Verifies a message against a signature with a public key, using the
    /// ed25519 EdDSA scheme.
    /// Returns 0 on verification success, 1 on verification failure, and values
    /// greater than 1 in case of error.
    fn ed25519_verify(message_ptr: u32, signature_ptr: u32, public_key_ptr: u32) -> u32;

    /// Verifies a batch of messages against a batch of signatures and public keys, using the
    /// ed25519 EdDSA scheme.
    /// Returns 0 on verification success, 1 on verification failure, and values
    /// greater than 1 in case of error.
    fn ed25519_batch_verify(messages_ptr: u32, signatures_ptr: u32, public_keys_ptr: u32) -> u32;

    /// Writes a debug message (UTF-8 encoded) to the host for debugging purposes.
    /// The host is free to log or process this in any way it considers appropriate.
    /// In production environments it is expected that those messages are discarded.
    fn debug(source_ptr: u32);

    /// Executes a query on the chain (import). Not to be confused with the
    /// query export, which queries the state of the contract.
    fn query_chain(request: u32) -> u32;
}

(from imports.rs)

You could actually implement a WebAssembly module in any language, and as long as you implement these functions, it will be interoperable, provided the JSON data passed around is in the proper format.

Note that these u32 pointers refer to Region instances, containing the offset and length of some Wasm memory, to allow for safe access between the caller and the contract:

/// Describes some data allocated in Wasm's linear memory.
/// A pointer to an instance of this can be returned over FFI boundaries.
///
/// This struct is crate internal since the cosmwasm-vm defines the same type independently.
#[repr(C)]
pub struct Region {
    /// The beginning of the region expressed as bytes from the beginning of the linear memory
    pub offset: u32,
    /// The number of bytes available in this region
    pub capacity: u32,
    /// The number of bytes used in this region
    pub length: u32,
}

(from memory.rs)

Implementing the Smart Contract

If you followed the instructions above, you should have a runnable smart contract. You may notice that all of the Wasm exports are taken care of by lib.rs, which you shouldn't need to modify. What you need to do is simply look in contract.rs and implement the instantiate and execute functions, defining your custom InstantiateMsg and ExecuteMsg structs for parsing your custom message types (as json):

#[entry_point]
pub fn instantiate(
  deps: DepsMut,
  env: Env,
  info: MessageInfo,
  msg: InstantiateMsg,
) -> Result<Response, ContractError> {}

#[entry_point]
pub fn execute(
  deps: DepsMut,
  env: Env,
  info: MessageInfo,
  msg: ExecuteMsg,
) -> Result<Response, ContractError> {}

#[entry_point]
pub fn query(deps: Deps, env: Env, msg: QueryMsg) -> Result<Binary, ContractError> {}

#[entry_point]
pub fn migrate(deps: DepsMut, env: Env, msg: MigrateMsg) -> Result<Response, ContractError> {}
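
To make the skeleton above concrete, here is a minimal filled-in instantiate. The InstantiateMsg with a count field and the COUNT storage item are hypothetical examples, and StdResult is used instead of a custom ContractError to keep the sketch self-contained:

use cosmwasm_std::{entry_point, DepsMut, Env, MessageInfo, Response, StdResult};
use cw_storage_plus::Item;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct InstantiateMsg {
    pub count: i32,
}

// Hypothetical persistent state for this example
pub const COUNT: Item<i32> = Item::new("count");

#[entry_point]
pub fn instantiate(
    deps: DepsMut,
    _env: Env,
    info: MessageInfo,
    msg: InstantiateMsg,
) -> StdResult<Response> {
    // Persist the initial counter value and record who created the contract
    COUNT.save(deps.storage, &msg.count)?;
    Ok(Response::new()
        .add_attribute("action", "instantiate")
        .add_attribute("owner", info.sender))
}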

The low-level db_read and db_write imports are nicely wrapped for you by a Storage implementation (which can be swapped out between real Wasm code and test code). This gives you a simple way to read and write data to a custom sub-database that this contract can safely write to as it wants. It's up to you to determine which data you want to store here:

/// Storage provides read and write access to a persistent storage.
/// If you only want to provide read access, provide `&Storage`
pub trait Storage {
    /// Returns None when key does not exist.
    /// Returns Some(Vec<u8>) when key exists.
    ///
    /// Note: Support for differentiating between a non-existent key and a key with empty value
    /// is not great yet and might not be possible in all backends. But we're trying to get there.
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;

    #[cfg(feature = "iterator")]
    /// Allows iteration over a set of key/value pairs, either forwards or backwards.
    ///
    /// The bound `start` is inclusive and `end` is exclusive.
    ///
    /// If `start` is lexicographically greater than or equal to `end`, an empty range is described, no matter the order.
    fn range<'a>(
        &'a self,
        start: Option<&[u8]>,
        end: Option<&[u8]>,
        order: Order,
    ) -> Box<dyn Iterator<Item = Record> + 'a>;

    fn set(&mut self, key: &[u8], value: &[u8]);

    /// Removes a database entry at `key`.
    ///
    /// The current interface does not allow to differentiate between a key that existed
    /// before and one that didn't exist. See https://github.com/CosmWasm/cosmwasm/issues/290
    fn remove(&mut self, key: &[u8]);
}

(from traits.rs)
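
As a small sketch of using this trait directly (most modern contracts use cw-storage-plus instead; the counter key and helper function are made up for illustration):

use std::convert::TryInto;

use cosmwasm_std::Storage;

const COUNTER_KEY: &[u8] = b"counter";

// Read a big-endian u64 counter, increment it, and write it back.
fn bump_counter(storage: &mut dyn Storage) -> u64 {
    let current = storage
        .get(COUNTER_KEY)
        .map(|bytes| u64::from_be_bytes(bytes.try_into().expect("counter must be 8 bytes")))
        .unwrap_or(0);
    let next = current + 1;
    storage.set(COUNTER_KEY, &next.to_be_bytes());
    next
}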

Testing the Smart Contract (rust)

For quick unit tests and useful error messages, it is often helpful to compile the code using the native build system and then test all code except for the extern "C" functions (which should just be small wrappers around the real logic).

If you have non-trivial logic in the contract, please write tests using rust's standard tooling. If you run cargo test, it will compile into native code using the debug profile, and you get the normal test environment you know and love. Notably, you can add plenty of requirements to [dev-dependencies] in Cargo.toml and they will be available for your testing joy. As long as they are only used in #[cfg(test)] blocks, they will never make it into the (release) Wasm builds and have no overhead on the production artifact.

Note that for tests, you can use the MockStorage implementation, which gives a generic in-memory hashtable in order to quickly test your logic. You can see a simple example of how to write a test in our sample contract.
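
Building on the hypothetical instantiate sketch above, a unit test might look roughly like this (mock helpers from cosmwasm_std::testing, 1.x API):

#[cfg(test)]
mod tests {
    use super::*;
    use cosmwasm_std::coins;
    use cosmwasm_std::testing::{mock_dependencies, mock_env, mock_info};

    #[test]
    fn proper_instantiation() {
        // In-memory mocks for storage, api and querier
        let mut deps = mock_dependencies();
        let msg = InstantiateMsg { count: 17 };
        let info = mock_info("creator", &coins(1000, "earth"));

        let res = instantiate(deps.as_mut(), mock_env(), info, msg).unwrap();
        assert_eq!(0, res.messages.len());

        // State was written as expected
        assert_eq!(17, COUNT.load(deps.as_ref().storage).unwrap());
    }
}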

Testing the Smart Contract (wasm)

You may also want to ensure the compiled contract interacts with the environment properly. To do so, you will want to create a canonical release build of the <contract>.wasm file and then write tests with the same VM tooling we use in production. This is a bit more complicated, but we added some tools to help in cosmwasm-vm, which can be added as a dev-dependency.

You will need to first compile the contract using cargo wasm, then load this file in the integration tests. Take a look at the sample tests to see how to do this... it is often quite easy to port a unit test to an integration test.
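
A rough sketch of what such an integration test can look like, reusing the hypothetical InstantiateMsg from above. Exact helper names and trait bounds vary between cosmwasm-vm versions, and msg types typically also derive schemars::JsonSchema in the template; the crate and file names here are placeholders:

use cosmwasm_std::{coins, Response};
use cosmwasm_vm::testing::{instantiate, mock_env, mock_info, mock_instance};

// The msg type comes from your contract crate (hypothetical name here).
use my_contract::msg::InstantiateMsg;

// Artifact produced by `cargo wasm`; adjust the path to your contract name.
static WASM: &[u8] =
    include_bytes!("../target/wasm32-unknown-unknown/release/my_contract.wasm");

#[test]
fn wasm_instantiation_works() {
    // Runs the real Wasm artifact inside a cosmwasm-vm instance with mock backends
    let mut deps = mock_instance(WASM, &[]);
    let msg = InstantiateMsg { count: 17 };
    let info = mock_info("creator", &coins(1000, "earth"));

    let res: Response = instantiate(&mut deps, mock_env(), info, msg).unwrap();
    assert_eq!(0, res.messages.len());
}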

Production Builds

The above build process (cargo wasm) works well to produce wasm output for testing. However, the output is quite large, likely around 1.5 MB, and not suitable for posting to the blockchain. Furthermore, it is very helpful to have a reproducible build step so others can prove the on-chain wasm code was generated from the published Rust code.

For that, we have a separate repo, rust-optimizer that provides a docker image for building. For more info, look at rust-optimizer README, but the quickstart guide is:

docker run --rm -v "$(pwd)":/code \
  --mount type=volume,source="$(basename "$(pwd)")_cache",target=/target \
  --mount type=volume,source=registry_cache,target=/usr/local/cargo/registry \
  cosmwasm/optimizer:0.15.0

It will output a highly size-optimized build as contract.wasm in $CODE. With our example contract, the size went down to 126 kB (from 1.6 MB with cargo wasm). If we didn't use serde-json, this would be much smaller still...

Benchmarking

You may want to compare how long the contract takes to run inside the Wasm VM compared to in native rust code, especially for computationally intensive code, like hashing or signature verification.

TODO add instructions

Developing

The ultimate auto-updating guide to building this project is the CI configuration in .circleci/config.yml.

For manually building this repo locally during development, here are a few commands. They assume you use a stable Rust version by default and have a nightly toolchain installed as well.

Workspace

# Compile and lint
./devtools/check_workspace.sh

# Run tests
./devtools/test_workspace.sh

Contracts

Step Description Command
1 fast checks, rebuilds lock files ./devtools/check_contracts_fast.sh
2 medium fast checks ./devtools/check_contracts_medium.sh
3 slower checks ./devtools/check_contracts_full.sh


cosmwasm's Issues

Pre-check wasm before compiling

Let's filter out some stuff before passing it to the backend:

  • valid wasm (magic number)
  • no floating point ops (wasm-parser can do that)
  • check signatures of imports/exports - validate before attempting to instantiate

others??

Cache Modules

Building on #22 and #24

When creating a new contract, don't just store wasm, but also store the precompiled module in a filesystem-backed cache (if supported by the backend). Since we need to compile anyway to ensure correctness, we can keep the result for later to only compile to native one time.

Ensure we use these cached modules when calling instantiate

Investigate wasm-bindgen

This is a well-maintained tool for building connections from rust/wasm to javascript. Much of this is not needed by us, but the build steps are well refined and it apparently does many optimizations. Let us see if this can be a compatible build step and if it provides a benefit.

Spike: Add (optional) higher level db methods

Currently, the get/from_slice and set/to_vec pairs capture everything we need with the db.

If we add iterators #53 we may want some higher level logic to iterate over groups. Maybe this is as easy as implementing some standard rust trait, maybe we could encode some typical access patterns to make them easier

Add benchmarks to contracts

Both for pure rust (what we do in unit test), as well as calling them as compiled wasm through the vm (what we do in integration test).

Consider if we want to check multiple backends, or just pick one

Remove wasm-gc from the documented flow?

wasm-gc calls itself unnecessary in this note on their Github page:

Note: you probably don't need to use this project. This project is no
longer necessary to run by hand, nor do you need the wasm-gc executable
installed.

For a longer explanation, these two points mean that wasm-gc is likely no
longer a useful command to run for you:

  1. The Rust compiler now natively supports --gc-sections when linking wasm
    executables, which means wasm executables already have 90% of their garbage
    removed when coming out of the compiler.
  2. The wasm-pack (and wasm-bindgen) project will already run this by
    default for you, so there's no need to run it again.

Don't include this build! If you think you need to feel free to open an issue
on wasm-pack or wasm-bindgen, as it may be a bug in one of those projects!

Is there a good reason to keep wasm-gc as part of Building.md?

Simplify Integration testing

Much work was done to make the integration tests quite similar to unit tests, and it is rather easy to port them. One issue is the setup boilerplate that is copied to the top of each test case to instantiate the contracts. Here is a sample extract:

    let storage = MockStorage::new();
    let mut instance = Instance::from_code(&WASM, storage).unwrap();
    let msg = init_msg(500, 600, real_hash());
    let params = mock_params_height("creator", &coin("1000", "earth"), &[], 450, 550);
    let res = call_init(&mut instance, &params, &msg).unwrap().unwrap();

We can add a new function cosmwasm_vm::testing::instantiate to handle much of this boilerplate. It should allow us to run the following code:

    let msg = init_msg(500, 600, real_hash());
    let params = mock_params_height("creator", &coin("1000", "earth"), &[], 450, 550);
    let instance = testing::instantiate(&WASM, &params, &msg).unwrap();

I will take a few more looks at the existing integration tests to see if this can be further simplified (or eg, the handle calls can be simpler).

Crashes on panic/assert inside with_storage

The following code will trigger a crash:

instance.with_storage(|_store| {
    assert_eq!(1, 2);
});

Crash report looks like:

test instantiation::works ... ok
integration-958d6019ea3251d7(71261,0x70000f24d000) malloc: *** error for object 0x800000000ffff: pointer being freed was not allocated
integration-958d6019ea3251d7(71261,0x70000f24d000) malloc: *** set a breakpoint in malloc_error_break to debug
error: process didn't exit successfull

This is a sign of double-freeing a pointer. Must be unsafe code needing fixing. Got to make this safe.

Hackatom: Pull all generic functionality into top-level library

All generic functions in contracts/hackatom should move into src and be imported by the contract. This includes exports.rs, imports.rs, mock.rs and types.rs. Make sure all helpers needed in tests are not wrapped in #[cfg(test)] so they can be imported from other crates (much of this was done inside the integration test already).

Add a workspace lib/vm (cargo: cosmwasm-vm) that includes all helper code from the integration test that depends on wasmer. This can be included as a dev-dependency by any contract to allow for simple integration testing. It can also be used as a production building block for the golang integration.

Write Design Document for Queries

Note: resolving this in the docs repo, not here: CosmWasm/docs#20

The first pass on even exposing a custom query call on the contracts has led to a few issues, notably https://github.com/confio/cosmwasm-examples/pull/10/files/c4d69b6c5c5dde327ce6c9a5e54af733e087b3b4#diff-120518ef3c48de5388b0219aa0f7ea31 and https://github.com/confio/cosmwasm/issues/71 Let us use these first learnings to design a proper query interface.

Here are my goals:

  • External clients as well as executing contracts should be able to (synchronously) query the state of any other contract running on the same chain.
  • Simple case is a raw query of one value, specified by contract address and key (to the contract-specific sub-store)
  • We should also allow (optionally) contracts to perform more complex queries that involve them running custom logic based on the query.
  • One contract querying another one should not be able to trigger any reentrancy conditions.

Given those goals, here are some proposed designs:

  • Rather than instantiating a vm and executing code for a simple RawQuery, we can handle that in the surrounding runtime
  • Custom queries should be defined on a per-module basis with a QueryMsg, just like HandleMsg.
  • The query function should be able to parse the QueryMsg and has read-only access to the contract-specific storage, but cannot modify storage, query other contracts, or return Msgs to call other contracts - it must be self-contained and make no modifications to state.
  • The handle function should receive one more argument, a "query dispatcher", where it can send out query messages. I think init should have this as well (see below).
  • We should demo some good use-case of such queries in a contract to ensure the design makes sense. I could demo by integrating a name-service query in the escrow contract as part of the tutorial - allowing the escrow to resolve names to canonical addresses for the participants.

Example "query dispatcher":

pub trait QueryDispatcher {
  fn query(&self, query: Query) -> Result<QueryResponse>;
}

pub enum Query {
  Raw(RawQuery),
  Custom(CustomQuery),
}

pub struct RawQuery {
  pub contract_addr: &[u8],
  pub key: &[u8],
}

pub struct CustomQuery {
  pub contract_addr: &[u8],
  pub msg: &[u8],
}

In this case, the Query can be parsed by the runtime and if it is Raw, then it will just directly access the contract-specific storage and return the raw values. If it is Custom, then msg is a json-encoded blob destined for the remote contract, and the external query function of the specified contract will be called with that data, to be parsed there in whatever format the contract chooses.

@webmaster128 I would love to hear your feedback here and design improvements.

Allow flexible data stores in cache.get_instance()

Currently we tie it to a MockStorage that is set fresh on each instantiate.

We need to use the KVStore / DB passed in to the library for go-cosmwasm. This (can) implement the Storage trait, we just need a way to override the implementation, and then expose that.

Measure default instance memory usage and optimize

I heard the default memory for instances was 32MB, which is far more than any contract I have written will need. The size of each one limits the number of contracts that can be held in memory via the Instance LRUCache.

It seems there is a fair bit of control of the memory regions that you can set up in an instance.

https://spectrum.chat/wasmer/runtime/share-memory-region-between-wasm-instances~3abfaa44-8fc8-4401-8590-1ec3bfbede5d

You can control min and max memory in the descriptor: https://docs.rs/wasmer-runtime/0.16.0/wasmer_runtime/wasm/struct.MemoryDescriptor.html

Measure the default memory usage (both the wasm memory pages and the instance overhead: globals, wasm bytecode, etc.). Then experiment on optimizing it, and experiment with the LRUCache. Definitely document this.

Instantiate contract by id

Make use of the storage #22 and call instantiate on a contract by id. This should be easy to expose in the go-cosmwasm api

We will likely have to change lib/vm so Instance handles the complete types (serialize Params, parse CosmosResult), as we use in the current tests. But add RawInstance that just handles bytes... as we need that to pass to go and do the parsing there.

Consider Second Language for Wasm contracts

Rust has great support for wasm and I do like it, but it also has a steep learning curve and may not be ideal for getting the largest number of developers building on cosmwasm. Since the actual bindings are not so big, once they stabilize, I would be interested in adding support for writing contracts in another language. This would require both the basic libraries (what cosmwasm provides) as well as a tutorial. And ideally some bindings to do integration tests in the same language (ffi into cosmwasm-vm).

Time for investigation is now. Time for implementation is when cosmwasm api is stabilized (at or nearing a 1.0 release)

There is a nice list of wasm-supporting languages. Many are "unstable but usable". I would prefer to take one with stable wasm support, but this really ends up with C/C++, Rust, AssemblyScript or Go if we stick to mainstream languages with stable support. We may also consider some nice ones with unstable support.


Stable

AssemblyScript A subset of TypeScript, meant to introduce web developers to wasm. Near Protocol is using this as the main language for their wasm smart contracts (alongside Rust), chosen for ease-of-use by devs.

Go There is a custom llvm backend for go, branded "tinygo" which allows you to compile go programs to wasm. While go is more verbose than the other languages on this list, it is also known to cosmos SDK developers.


Unstable

Kotlin Nice ergonomic language, known by many android developers. Kotlin/Native adds wasm32 support, but it is not 1.0 yet and seems like progress is slowing down (from forum comment)

Swift a very nice language, used by iOS developers everywhere, but wasm support is just starting:

Add stateless/pure functions

Requested by @aaronc

The current flow is well defined for Modules / Handlers.

  1. Upload code, identified by wasm hash
  2. Create an instance with a custom address, storage and account balance
  3. Execute methods on this handler, which may modify the state OR
    3b. Query this handler, which may read the state.

There is also the case for uploading some stateless "decision functions", such as evaluating if a token transfer is allowed, without requiring an entire, custom token module. Or taking on chain data and performing a verification calculation (for ecological state contracts).

This could be represented as a different interface, exposed differently in both the wasmd module, as well as a different build system and interface in wasm. It would only have access to the message (and possibly params like block.{height,time}). No ability to access state directly or indirectly, no ability to dispatch state-changing messages.

  1. Upload function code
  2. Execute function (all data is passed as arguments)

If it needs to work on data from the KVStore, the calling module must load that data and pass it in with the arguments.

Add CI support

Add support for CircleCI and enable it, both for top-level library, and any contract.
Contracts will need to do: cargo wasm, wasm-gc, cargo test to ensure integration test has proper wasm and the full stack works (unit and integration).

Investigate Using Cap'n Proto for serialization

Requested by @aaronc

The idea is to use Cap'n Proto as a serialization codec. It is fast, but it requires schemas to decode - nothing as nice for devs as easily readable json in the messages.

I could see this first used to pass Params between the cosmos SDK and the wasm module, as that is never exposed to the end user, rather used internally. If this shows it works, individual contracts could decide to use either json or Cap'n Proto for their storage and message formats (which are just opaque bytes to every level between client and server). It would make it a bit harder to call between contracts with different serialization, but in the end only require another import for the codec.

Add (optional?) context to Parse/Serialize

These errors may be returned multiple places in the course of execution, and it is unclear which Parse failed. Since it seems to be impossible to activate backtraces in wasm builds, even in debug mode, we should allow some context.

I propose adding to at least these two snafu Errors, the following:

  context: &'static str
  // or
  context: Option<&'static str>

This lets us then add more info to the ParseErr:

    let msg: HandleMsg = from_slice(&msg).context(ParseErr {context: "HandleMsg"})?;
    // or
    let params: Params = from_slice(&params).context(ParseErr { context: Some("Params")})?;
    let msg: HandleMsg = from_slice(&msg).context(ParseErr {context: None})?;

Since this is a breaking change in ParseErr, and we likely want to use this everywhere, it makes sense to just add a reference to a static string.

Cache Instances

Building on #25

Upon instantiate or handle, we must create an in-memory instance from the cached, pre-compiled module. We can keep this instance in memory and reuse if the same smart contract is called again shortly in the future.

  • Implement a basic LRU cache for the last eg. 10 instances
  • Combine module and instance cache (and wasm code) in one "cache" struct that has high level methods for working with it.
  • Ensure this is generic enough to support multiple backends (at least swapping them via feature flags)

Add singlepass backend, gas metering

Since default cranelift doesn't support gas metering, add a feature flag for backends, and add support for single-pass, which does gas metering (but not serialization).

Ensure integration tests work (with proper gas prices enforced).

Remove RawQuery support from cosmwasm

This will no longer be used, only smart queries.
Let's remove the references from cosmwasm/src

Also

pub struct QueryResponse {
    pub results: Vec<Model>,
}

doesn't make much sense anymore; we should allow any Vec<u8> to be returned from the "smart query". It is a generic "json in, json out" interface, so we cannot impose an order. The only order is in the "raw queries" implemented in the sdk module.

Add Transaction/Rollback semantics to MockStorage

The Storage interface is by design simple, to allow us to have minimal requirements on the hosting system running the contracts. However, we do assume that any changes to state will be rolled-back by the environment when handle fails. This is done in eg. wasmd. However, there is no documented way to do this in unit tests.

The outcome of this issue is to add required functionality (if any), and document best practices.

Since MockStorage implements Clone, at least in non-integration tests, we can do the following.

    let mut store = MockStorage::new();
    let mut cache = store.clone();
    let res = handle(&mut cache, params, msg);
    if !res.is_err() {
        store = cache;
    }

I am not sure how to do the above cleanly on integration tests. Ideally there would be one API to allow both, just as there is a similarly named cosmwasm_vm::testing::handle function that looks like handle but takes a VM instance instead of a store.

Maybe some API like:

let mut store = MockStorage::new();
let res = handle_or_rollback(&mut store, params, msg);

Best I can mock-up right now is, however:

pub fn handle_or_rollback<T: Storage + Clone>(store: T, params: Params, msg: Vec<u8>) -> (T, Result<Response>) {
    let mut cache = store.clone();
    let res = handle(&mut cache, params, msg);
    if res.is_ok() {
        (cache, res)
    } else {
        (store, res)
    }
}

// usage:
let store = MockStorage::new();
let (store, res) = handle_or_rollback(store, params, msg);

Defined "Canonical Address" and user-facing formats

So @webmaster128 convinced me that using the default human-readable address string for the blockchain is not a good identifier in the contracts, as this may change. Some chains have migrated from hex to bech32, and a hard-fork may change the bech32 prefix. If we depend on this representation as a user identifier in eg an erc20 contract, such a change may cause everyone to lose access to their tokens. Definitely not ideal.

Underneath, every chain uses some binary representation for the address - typically, but not always, 20 bytes. The length of this data and the algorithm from pubkey -> address should not change during the life of the blockchain. However, the user will want to submit transactions using the native string, not always the base64 encoding of the raw address bytes. The readability of the json InitMsg / HandleMsg is very important for good developer UX.

Here are some ideas on canonical addresses:

  • Define a "canonical address" which is &[u8] and guaranteed to be the same size for the life of one chain (eg 20 or 32 bytes).
  • In params.message.signer and params.contract.address, we will use this "canonical address" instead of a string
  • If needed, we can add a value to params with the blockchain-specific "canonical address length" but my feeling is it is unnecessary, as the info is available as params.contract.address.len().
  • All storage keys should use a canonical address to identify the account. (See #77)
  • Cross-contract queries use the canonical address to identify the target contract. (See #78)

Here are ideas on the use of addresses in HandleMsg / InitMsg:

  • We should accept standard blockchain-specific string encoding and the contract should convert to "canonical address" after parsing the json
  • Ideally the contract code is not tied to one chain, if any parameterization is needed, such as bech32 prefix, this should be present in Params (eg. optional bech32 prefix, if native encoding is hex, etc). Note that this may change during the lifetime of a contract, but not during a transaction.
  • We should provide a standard function in cosmwasm to take the metadata from Params, and a human provided string from the json msg and return Result<CanonicalAddress>.

We should also allow contracts to override the string -> canonical resolution mechanism. For example, after implementing the name service demo (this is the standard cosmos sdk tutorial that I was requested to port to cosmwasm), we can update the escrow contract to make it name service aware. That is, if InitMsg includes :bob, it will see the leading : and send bob as a query to the name service contract (the address of the name service contract is set in the initial config). If it doesn't start with :, then it uses standard resolution. After resolution, we always end up with a CanonicalAddress (or an error) and use this standard bytes representation for all internal work.

Given this storage, we will also need to use some resolution logic on queries. This could be done client side for raw queries, but we could also make a custom query, such as ShowTokenAllowance{granter: String, grantee: String}. This is tied into #72 and actually makes me wonder whether we should allow queries to then query again, or whether that would be too risky.
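
For reference, this design eventually surfaced in cosmwasm-std as part of the Api trait. A minimal sketch of the canonical/human round-trip (the function name here is illustrative):

use cosmwasm_std::{Api, CanonicalAddr, StdResult};

// Canonicalize a human-readable address, then turn it back into its human form.
fn canonical_roundtrip(api: &dyn Api, input: &str) -> StdResult<String> {
    let canonical: CanonicalAddr = api.addr_canonicalize(input)?;
    let human = api.addr_humanize(&canonical)?;
    Ok(human.into_string())
}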

Support arbitrary keys in RawQuery/QueryResponse

Right now, RawQuery expresses the database key as a string.

// RawQuery is a default query that can easily be supported by all contracts
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema)]
pub struct RawQuery {
    pub key: String,
}

This limits the use of RawQuery to keys which are valid UTF-8.

Both the ERC20 token contract as well as the following function signature expect support for arbitrary binary keys: pub fn raw_query(key: &[u8]) -> Result<Vec<u8>> {.

Using a hex or base64 encoded binary string would allow arbitrary key queries.

Add DB.Iterator to callbacks

  • Ability to create an iterator over a range of items.
  • Both forward and reverse
  • Return (key: &[u8], value: &[u8]) or such
  • Add helpers for prefix scan
  • Update MockStorage
  • Update go-cosmwasm callbacks

Generate ABI when compiling contract

There is a clear ABI for the contract, based on JSON format of InitMsg and HandleMsg.

Add a build step to generate a dev-readable document alongside the compressed wasm, so we can easily target it with client code. Most likely json-schema or some variant.

Some starting points I found (suggest others):

JSON Schema validation:

This looks promising - macros to create schema from code:

Reverse direction: JSON Schema -> rust/serde:

Also interesting, typescript Definitions from rust/serde:

Allow (optional) upgradeable contracts

Requested by @aaronc

"I would like to add the ability to upgrade the code behind a contract. This would simply be adding a maintainer sdk.Address on the contract. It would also be nice if this included support for a migrate function exported from the contract. With this contracts could be self-contained modules maintained by something other than validator governance. One use case is that we would like to have a contract for issuing a credit from claims. The maintainer would be the credit designer which is likely some trusted third party or a DAO. Being able to upgrade prevents ethereum-like smart contract failure cases"

Optionally use DIDs as Human Addresses

This is a sketch of what would be possible. Please add comments to help develop it

Let's keep imagining what is possible with Human Names, once we develop a solution to the name service issue. We could not just use a reference to resolve a user address, but resolve a contract as well. Maybe we could dispatch a message to an "ERC20" token contract not by its name, but by its uniquely registered token ticker. We would soon need some way to distinguish the scope or context of a name. This is where Decentralized Identifiers (DIDs) could come in. Imagine the following message format, that could be used either by an end-client or by a smart contract "actor":

{
    "destination": "did:token:XRN",
    "msg": {
        "transfer": {
            "from": "did:account:alice",
            "to": "did:account:bob",
            "amount": "13.56"
        }
    }
}

Each blockchain would need to expose a consistent way to resolve them to canonical addresses (that can be used by all contracts). And we could further see their use when referencing objects on remote chains, to be sent over IBC, leveraging automatic name resolution on the recipient chain as a form of auto-discovery (I don't need to know what the address of the XRN contract is on the other chain, just that it is registered as the real XRN).

Cannot compile contract with rust stable (cosmwasm master)

After upgrading cosmwasm and cosmwasm-vm in my contract with

diff --git a/erc20/Cargo.lock b/erc20/Cargo.lock
index 976eba7..a6f2ed3 100644
--- a/erc20/Cargo.lock
+++ b/erc20/Cargo.lock
@@ -161,8 +161,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 
 [[package]]
 name = "cosmwasm"
-version = "0.5.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
+version = "0.5.2"
+source = "git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f#a7e02c8b77f472906bb2b92c2c0406c7b01b751f"
 dependencies = [
  "schemars 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "serde 1.0.103 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -172,10 +172,10 @@ dependencies = [
 
 [[package]]
 name = "cosmwasm-vm"
-version = "0.5.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
+version = "0.5.2"
+source = "git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f#a7e02c8b77f472906bb2b92c2c0406c7b01b751f"
 dependencies = [
- "cosmwasm 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "cosmwasm 0.5.2 (git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f)",
  "hex 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
  "lru 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "memmap 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -327,8 +327,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 name = "erc20"
 version = "0.1.0"
 dependencies = [
- "cosmwasm 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
- "cosmwasm-vm 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "cosmwasm 0.5.2 (git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f)",
+ "cosmwasm-vm 0.5.2 (git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f)",
  "hex 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "schemars 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "serde 1.0.103 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -1157,8 +1157,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 "checksum const-random 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "7b641a8c9867e341f3295564203b1c250eb8ce6cb6126e007941f78c4d2ed7fe"
 "checksum const-random-macro 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c750ec12b83377637110d5a57f5ae08e895b06c4b16e2bdbf1a94ef717428c59"
 "checksum constant_time_eq 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "995a44c877f9212528ccc74b21a232f66ad69001e40ede5bcee2ac9ef2657120"
-"checksum cosmwasm 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "421795f63fb6b5b176ff9c0147d04eab4b450b3af3b9f058f9d5ee4acaff2c2b"
-"checksum cosmwasm-vm 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "82682509b316b6d033b240fcbdbc647c39c2ab85453a2bea8765f2e4187a430c"
+"checksum cosmwasm 0.5.2 (git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f)" = "<none>"
+"checksum cosmwasm-vm 0.5.2 (git+https://github.com/confio/cosmwasm?rev=a7e02c8b77f472906bb2b92c2c0406c7b01b751f)" = "<none>"
 "checksum cranelift-bforest 0.44.0 (registry+https://github.com/rust-lang/crates.io-index)" = "fff04f4ad82c9704a22e753c6268cc6a89add76f094b837cefbba1c665411451"
 "checksum cranelift-codegen 0.44.0 (registry+https://github.com/rust-lang/crates.io-index)" = "6ff4a221ec1b95df4b1d20a99fec4fe92a28bebf3a815f2eca72b26f9a627485"
 "checksum cranelift-codegen-meta 0.44.0 (registry+https://github.com/rust-lang/crates.io-index)" = "dd47f665e2ee8f177b97d1f5ce2bd70f54d3b793abb26d92942bfaa4a381fe9f"
diff --git a/erc20/Cargo.toml b/erc20/Cargo.toml
index d1fd543..96578e1 100644
--- a/erc20/Cargo.toml
+++ b/erc20/Cargo.toml
@@ -28,7 +28,7 @@ cranelift = [ "cosmwasm-vm/default-cranelift"]
 singlepass = [ "cosmwasm-vm/default-singlepass"]
 
 [dependencies]
-cosmwasm = { version = "0.5.1" }
+cosmwasm = { git = "https://github.com/confio/cosmwasm", rev = "a7e02c8b77f472906bb2b92c2c0406c7b01b751f" }
 schemars = "0.5"
 serde = { version = "1.0.60", default-features = false, features = ["derive"] }
 snafu = { version = "0.5.0", default-features = false, features = ["rust_1_30"] }
@@ -38,5 +38,5 @@ hex = "0.4.0"
 wasm-bindgen = "0.2"
 
 [dev-dependencies]
-cosmwasm-vm = { version = "0.5.1", default-features = false }
+cosmwasm-vm = { git = "https://github.com/confio/cosmwasm", rev = "a7e02c8b77f472906bb2b92c2c0406c7b01b751f" }
 serde_json = "1.0"

and adapting my code according to the API changes,

  1. cargo build works
  2. cargo wasm works
  3. cargo unit-test fails to compile with
$ cargo unit-test
   Compiling dynasm v0.5.2
   Compiling wasmer-clif-fork-frontend v0.44.0
   Compiling cranelift-native v0.44.0
   Compiling wasmer-clif-fork-wasm v0.44.0
error[E0554]: `#![feature]` may not be used on the stable release channel
 --> /myhome/.cargo/registry/src/github.com-1ecc6299db9ec823/dynasm-0.5.2/src/lib.rs:1:1
  |
1 | #![feature(proc_macro_diagnostic)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> /myhome/.cargo/registry/src/github.com-1ecc6299db9ec823/dynasm-0.5.2/src/lib.rs:2:1
  |
2 | #![feature(proc_macro_span)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   Compiling wasmer-clif-backend v0.11.0
   Compiling wasmer-runtime v0.11.0
error: aborting due to 2 previous errors

For more information about this error, try `rustc --explain E0554`.
error: could not compile `dynasm`.

where dynasm is known to work on rust nightly only.

There are two things I wonder:

  1. Why does this error occur now, even though my dynasm and wasmer-singlepass-backend entries in erc20/Cargo.lock did not change?
  2. Crate cosmwasm_vm is only used for integration tests. Why does the command cargo unit-test even need to compile it?

Hackatom: Finalize integration test and refactor

Building on #1 and #2 we can produce a clean integration test that handles both init and send, and provides some good abstractions on top of all accesses to the instance. Spend time pulling these helper methods into their own module and making the vm access in the integration test clean. This is the same code we will need to call into the vm in production - it should be reusable (outside the test itself).

Add wasm storage

Implement support for create and get_code in go-cosmwasm.

We should minimally:

  • Save a wasm file locally
  • Return a contract_id
  • Load wasm based on contract_id

Add support for LLVM backend

Once we have a feature flag, we can support LLVM with a switch as well... even if unlikely to use for permissionless uploading, this can be valuable for governance approved "precompiles"

Avoid JIT bombs - investigate wasm compilation

Try to run singlepass or cranelift compiler inside wasm...

  • The wasm -> native code compilation step can run in wasm.
  • Must separate out the target we compile the wasm to (x86) from the target we compile the Rust to (wasm)
  • Check if this can work - how much work to do it - rough performance

Derive common helpers for common types

For cosmwasm::types, we can #[derive(Clone,Debug,Default)] and maybe more.

I had avoided them to limit code size, but they can be helpful in test cases, and I discovered that if they are not used in production code paths, they do not affect the final wasm code size (llvm is smart). Let's make devs' lives easier and help testing.

(I already found cases where this would have saved a few lines in test code, and I had to do longer work-arounds to not use them.)

Hackatom: Improve memory management API

We expose allocate and deallocate, which return pointers. This is good to allocate memory so we can set arguments. And we can take a pointer as result and load in a string result. However, there are two issues here:

  1. Minor: we don't include length, so require 0 byte terminated C strings, copying one byte at a time. This causes "nice" crashes when we forget to explicitly add this null byte to some rust data.
  2. Major: If the guest (wasm) calls out to the host (via imports - c_read), it is very hard to call back into the guest. We can now use function tables but it is unclear how to get the index in the first place. Ideally, we do not need to call allocate inside the host function.

Solution:

  1. Don't pass a ptr to memory, but a ptr to a Slice (offset, len) - both as a return value from alloc, and as an argument to function calls and free. This lets us clearly specify a size and avoid the first problem.
  2. Wasm should never expect a callback to be able to allocate memory. It should pass a pre-allocated buffer to the function where it expects it to be written, and the function can return the length written. This is much like old-school C style, which works under very similar constraints to ours. This avoids the callback issue, and the length also guarantees the host function will never write data outside of the pre-allocated buffer.

This describes the general idea of the slice structure, intended for use over rust-ffi.
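
A sketch of that slice idea (the Slice name is illustrative; this is essentially what later became the Region struct shown earlier in this README):

/// Illustrative only: an (offset, length) pair describing a buffer in Wasm
/// linear memory, passed across the FFI boundary instead of a bare pointer.
#[repr(C)]
pub struct Slice {
    /// Offset from the start of linear memory, in bytes
    pub offset: u32,
    /// Number of bytes in the buffer
    pub len: u32,
}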

Remove failure crate, at least in wasm compile

It is easy to use and adds backtraces, which is great for rapid development and debugging.

However, we can start stabilizing the error cases (needing less rapid development) and we do not need backtraces in wasm code (they are not available).

This crate comes with quite the code size overhead. Let's try to make clearly defined errors (as an enum) and ideally allow backtraces in test mode but not in production.

Bonus: clearer enumeration of failure mode, and reduced wasm bytecode size

Validate Wasm before storing

Do some checks on create #22

  • Compile it and try to link / otherwise verify imports
  • Use wasm-parser to evaluate determinism (no floats) and imports?
