paritytech / smoldot

Alternative client for Substrate-based chains.

License: GNU General Public License v3.0

Languages: Rust 96.34%, TypeScript 2.97%, JavaScript 0.66%, Dockerfile 0.01%, Shell 0.01%
Topics: polkadot, rust, client, substrate

smoldot's Introduction

DEPRECATED

This repository is deprecated in favor of: https://github.com/smol-dot/smoldot

Lightweight Substrate and Polkadot client.

Introduction

smoldot is an alternative client of Substrate-based chains, including Polkadot.

There exist two clients: the full client and the Wasm light node. The full client is currently a work in progress and doesn't support many of the features that the official client supports.

The main development focus is currently on the Wasm light node. Using https://github.com/polkadot-js/api/ and https://github.com/paritytech/substrate-connect/ (which uses smoldot as an implementation detail), one can easily connect to a chain and interact with it in a fully trustless way, from JavaScript.

The Wasm light node is published:

Objectives

There are multiple objectives behind this repository:

  • Write a client implementation that is as comprehensive as possible, to make it easier to understand the various components of a Substrate/Polkadot client. A large emphasis is put on documentation.
  • Implement a client that is lighter than Substrate, in terms of memory consumption, number of threads, and code size, in order to compile it to WebAssembly and distribute it in web pages.
  • Experiment with a new code architecture, to maybe upstream some components to Substrate and Polkadot.

Trade-offs

In order to simplify the code, two main design decisions have been made compared to Substrate:

  • No native runtime. The performance of the wasmtime library is good enough that having a native runtime is no longer critical.

  • No pluggable architecture. smoldot supports a hard-coded list of consensus algorithms, at the moment Babe, Aura, and GrandPa. Support for other algorithms can only be added by modifying smoldot's code; it is not possible to plug in a custom algorithm from the outside.

Building manually

Wasm light node

In order to run the Wasm light node, you must have rustup installed.

The wasm light node can be tested with cd bin/wasm-node/javascript and npm install; npm start. This will compile the smoldot wasm light node and start a WebSocket server capable of answering JSON-RPC requests. This demo will print a list of URLs that you can navigate to in order to connect to a certain chain. For example you can navigate to https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944%2Fwestend in order to interact with the Westend chain.

Note: The npm start command starts a small JavaScript shim, on top of the wasm light node, that hard codes the chain to Westend and starts the WebSocket server. The wasm light node itself can connect to a variety of different chains (not only Westend) and doesn't start any server.

Full client

The full client is a binary similar to the official Polkadot client, and can be tested with cargo run.

Note: The Cargo.toml contains a [profile.dev] opt-level = 2 section, so cargo run alone should give performance close to release mode.
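For reference, the section in question looks like this:

```toml
# In the full client's Cargo.toml: optimize even in debug builds,
# so that `cargo run` performance is close to release mode.
[profile.dev]
opt-level = 2
```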

The following list is a best-effort list of packages that must be available on the system in order to compile the full node:

  • clang or gcc
  • pkg-config
  • sqlite

smoldot's People

Contributors

ank4n, arrudagates, bernardoaraujor, bkchr, dependabot-preview[bot], dependabot[bot], expenses, fanyang1988, florianfranzen, gilescope, josepot, melekes, nukemandan, rakanalh, raoulmillais, sabify, tomaka, tripleight, wirednkod, xlc


smoldot's Issues

During syncing I got this one

trapped in vm: wasm trap: unreachable                                                                                                                                                               ] #1932659 (🌐 121)
wasm backtrace:
  0: 0x2148 - <unknown>!rust_begin_unwind
  1: 0x1db5 - <unknown>!core::panicking::panic_fmt::h121ecdeb7ea416df
  2: 0x1428 - <unknown>!core::panicking::panic::h006470e608bed950
  3: 0x9df59 - <unknown>!Core_execute_block

thread 'tasks-pool-5' panicked at 'not yet implemented: verify err: Unsealed(Trapped { logs: "Hash not equale8c3f5c0203ef7f6750ced0c383b707212f54ace9a4a630b9f7021bb8dbd59f12963dc175007c7238b6128695d8ccedd33eb346ba0025109a58c2759f869fc0dpanicked at \'Storage root must match that calculated.\', /rustc/2454a68cfbb63aa7b8e09fe05114d5f98b2f9740/src/libcore/macros/mod.rs:10:9" })', /Users/pepyakin/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/src/libstd/macros.rs:16:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[1]    32579 abort      ./target/release/full-node

fwiw my build is 3ac550e but with tomaka/corooteen#1 included

`network/libp2p.rs` is full of race conditions

The entire file should be reviewed, possibly re-written.

Deadlocks and race conditions frequently happen. There is also a futures-cancellation problem with next_event() that warrants a refactor.

Implement Grandpa warp syncing

The sending side of Grandpa warp sync is paritytech/substrate#1208
This issue consists of implementing the "receiving" side: in other words, sending a request, downloading the data, and syncing the chain.

There are two things to do:

  1. Add the system that analyzes the Grandpa warp sync payload and advances the chain.
  2. Add the networking code that lets you download that Grandpa warp sync payload.

Advancing the chain

The finalized state of the chain is represented in the code by a chain::chain_information::ChainInformation.
As a high-level overview, advancing the chain consists of updating this ChainInformation: in other words, some system takes a ChainInformation and a warp sync payload as input, and outputs a new ChainInformation.

For logging purposes, it is important that we don't verify the entire warp sync payload at once, but step by step (one header and justification at a time).

As such, the API should look like:

pub struct Config {
    pub start: ChainInformation,
    pub warp_sync_payload: Vec<WarpSyncEntry>,
}

pub struct WarpSyncEntry {
    pub scale_encoded_header: Vec<u8>,  // please stick to Vec<u8>, no stronger typing
    pub scale_encoded_justification: Vec<u8>,  // same
}

pub enum Verification {
    Done(ChainInformation),
    InProgress(Verifier),
}

pub struct Verifier {
}

impl Verifier {
    pub fn new(config: Config) -> Self { ... }

    pub fn next(self) -> Verification { ... }

    pub fn state(&self) -> ChainInformationRef { ... }  // `Ref` because it's a reference to a ChainInformation
}

This can be put in a new module chain::warp_sync.

As for the implementation, it could use the NonFinalizedTree type found in blocks_tree.rs.
Unfortunately, the NonFinalizedTree requires you to pass all headers, and doesn't support skipping headers yet. It's unclear to me how to implement this, because one of the important steps when advancing the finalized block is scanning all the headers between the previous and new finalized blocks in order to find the new values to put in the updated ChainInformation.

Adding the networking code

The code that decodes the payload should be put in protocol.rs. Similar existing code can already be found there for block requests and storage proof requests.

In the same vein, a function to send out a warp sync request should be added to src/network/service.rs and full_node/src/network_service.rs, similar to the existing blocks_request and storage_proof_request functions.

Implement head-of-chain syncing

This todo must be tackled.

This should be done by duplicating optimistic.rs and modifying its logic to only download and sync individual blocks, similar to what sync.rs does in Substrate (sc_network::protocol::sync).

This is quite complicated, and I'm personally not familiar with all the details of how it works in Substrate nor of the challenges encountered.

One requirement is that all forks must be downloaded in order for finality to work. For example, if the best block of the local node is 5, and a node announces a block 4 that isn't the same as the block 4 known locally, this newly-announced block should be downloaded as well, because it's possible for validators to end up voting for finalizing this newly-announced block.

Rename repository?

There's some ambiguity between substrate --light and substrate-lite when pronounced.
It wouldn't be a bad idea to find a new name. People have suggested smolstrate, but I'm open to ideas.

unwrap in some future leads to a panic

Running this,

RUST_BACKTRACE=1 ./target/release/full-node

built from e53148c

I am getting

thread 'tasks-pool-1' panicked at 'called `Option::unwrap()` on a `None` value', /Users/pepyakin/dev/parity/substrate-lite/src/network/worker.rs:434:66
stack backtrace:
   0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
   1: core::fmt::write
   2: std::io::Write::write_fmt
   3: std::panicking::default_hook::{{closure}}
   4: std::panicking::rust_panic_with_hook
   5: rust_begin_unwind
   6: core::panicking::panic_fmt
   7: core::panicking::panic
   8: futures_util::future::future::FutureExt::poll_unpin
   9: full_node::start_network::{{closure}}::{{closure}}::{{closure}}::{{closure}}
  10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[1]    57829 abort      RUST_BACKTRACE=1 ./target/release/full-node

Sync state machine should report when the runtime needs to be compiled

If this None is reached, we should instead report to the user the start of a runtime compilation. When it's over, report the end of a runtime compilation. This way, it will be possible on a higher level to measure the time spent compiling runtimes.

The linked code is very work-in-progress and will likely change in the future. This issue exists in order to not forget.

Integrate Jaeger

One ambition of substrate-lite would be to connect to a Jaeger server and send telemetry information to it.

What we could send is:

  • For each block:

    • A span indicating when the block has been authored, with a span per transaction applied and a span per storage access.
    • A span indicating when the block announces have been sent out.
    • A span indicating the incoming block requests, subdivided into sub-spans about the various stages of the block request.
    • For each node, a span indicating the block verification process, with a span per storage access.
    • For each node, when it gets written to the database.
  • For each network connection:

    • A span when incoming data is processed or outgoing data is sent out.
  • For each transaction: (I'm not familiar enough with transactions)

    • For each node, a span when its announces are sent out.
    • For each node, a span when its announce is received.
    • For each node, a span when it gets verified.
    • For each node, a span when it gets added to the transaction pool.
    • For each node, a span when it gets added to a block being authored.
    • For each node, a span when it gets removed from the transaction pool.

Now, in order to have nice visualizations, the span IDs and trace IDs sent to Jaeger should be the same between the multiple nodes. Of course we don't want them to share IDs over the network like you normally would in a traditional Jaeger setup.

However we can generate deterministic IDs based on for example the block hash for everything block-related and the PeerIds for everything connection-related.

If this is done right, it should be possible to see in a nice way a graph of the various steps of the propagation of a block or transaction, and how long each step took.

OptimisticSync blocks reset likely buggy

I just had this situation where the OptimisticSync had to reset to the latest finalized chain. After the reset, it tried to re-import and re-verify blocks, but the blocks were then failing to execute.

Improve the informant

Some ideas:

  • Use the and characters for the progress bar, instead of = and .
  • Add an "opening database" mode.
  • Add a "warp syncing" mode.
  • Add a "fully synced" mode showing other information, but it's unclear which.

Consider letting consumers of smoldot wasm node choose what to do with println! messages

As a consumer of the wasm client what I want to do with these messages will vary. E.g. If my project uses a logging framework I probably want to send them there instead of directly to the console.

This can be achieved by adding a new logging callback to the config given to start, and then calling that callback here:
https://github.com/paritytech/smoldot/blob/main/bin/wasm-node/javascript/index.js#L184-L189
instead of calling console.log.

Happy to submit a PR for this if you think it's a good idea.

Add a keystore

I'm opening this issue mostly to keep notes.

As it took me a while to figure this out, the cryptographic code of Substrate that creates //Alice is found mostly here:

The derive function takes as parameter the path (e.g. Alice) and a seed. The seed is hardcoded for the publicly-known keys such as //Alice.

The sr25519 secret key of //Alice as obtained by calling schnorrkel::SecretKey::to_bytes() is:

[51, 166, 243, 9, 63, 21, 138, 113, 9, 246, 121, 65, 11, 239, 26, 12, 84, 22, 129, 69, 224, 206, 203, 77, 240, 6, 193, 194, 255, 251, 31, 9, 146, 90, 34, 93, 151, 170, 0, 104, 45, 106, 89, 185, 91, 24, 120, 12, 16, 215, 3, 35, 54, 232, 143, 52, 66, 180, 35, 97, 244, 166, 96, 17]

The ed25519 private key of //Alice is:

[136, 220, 52, 23, 213, 5, 142, 196, 180, 80, 62, 12, 18, 234, 26, 10, 137, 190, 32, 15, 233, 137, 34, 66, 61, 67, 52, 1, 79, 166, 176, 238]

Properly implement storage subscription for the wasm-node

At the moment, when a JSON-RPC subscription request is made, the node sends back the current value, but doesn't send any update on the value.

This should be implemented by making a storage proof request at each block for each item we're subscribed to.

This isn't properly implementable at the moment, as it would be unwise to send requests for each block when we're receiving ~1000 blocks/sec.
As such, this is waiting for #270 and #271.

Polkadot: Storage root mismatch at block 1739174

At block 1739174, a bit above block 1738766, there is a storage root mismatch:

Unsealed(WasmVm(WasmVm { error: Trapped, logs: "Hash not equal346cc244eb87ac1241ee3ece6546883555bbadddfe03384988a2757d5a71d7188a15dcb989c62b3d9cf0fd166bd8f15c15b7eca2ce3dbb95b8539c7bbe6a470fpanicked at 'Storage root must match that calculated.', /rustc/d7f94516345a36ddfcd68cbdf1df835d356795c3/src/libcore/macros/mod.rs:10:9" }))

Setup logging

While logging is forbidden in the library code (i.e. anything in src), we can and should set it up in the actual binaries.

For the full node, I'd go with slog, as the full node has several tasks running in parallel, and slog makes it possible to filter logs by task. Plus it has a variety of backends, which is very useful.

For the browser node, everything is single threaded, and good ol' log with the console_log crate should be enough.

On runtime upgrade, the `verify` module should also compile the new runtime

At the moment, a runtime can in theory put anything in the :code and :heappages keys and succeed verification.
The upper layer will then try to compile the new runtime, which would fail:

https://github.com/paritytech/substrate-lite/blob/2c7cf0afdd9ce99ad88b97130ebafdeaa9a6d7f9/src/chain/sync/optimistic.rs#L782

This failure is hard to handle, as it involves removing a block that is already considered as valid.

Instead, it would make more sense to me if we didn't mark as valid a block with a runtime upgrade containing a bad runtime.
In terms of API, we can make the verification return an Option<HostVmPrototype> containing the new runtime.

Note that in practice, runtimes call Core_runtimeVersion on the new runtime when they detect an upgrade, thereby ensuring that the new runtime is indeed valid.
At the moment, the intermediary runtime compiled in Core_runtimeVersion is thrown away, which means that each runtime is compiled twice (once during the block verification and once afterwards).
By tackling this issue, we could cache the intermediary runtime compiled in Core_runtimeVersion, and return it later, therefore saving one compilation.

Make it produce blocks

Here's a small roadmap before a node can produce blocks:

  • Change a bit sync::full_optimistic to allowing access to the various caches regarding the best block (virtual machine, storage, merkle root calculation).
  • Add the code in author that claims a slot and seals the block (#247)
  • Add a transaction pool (#11)
  • Add a keystore (#154)
  • Finish the networking to allow to gossiping out the newly-created block.
  • Finish the networking to allow to answering incoming block requests about the newly-created block.
  • Change sync_service.rs to pause the sync'ing during a claimable slot and instead produce a new block.
  • In the future, support offchain workers (#73). Because of the imonline mechanism, we'd get slashed without offchain workers.

Reduce wasm code size

A few things that can be done:

Call to `chain_getHeader` returns null

Creating this issue here to document my observations:

When trying to call author_submitExtrinsic using PolkadotJS, the substrate-lite node receives 3 calls:

Request 1:
{"id":8,"jsonrpc":"2.0","method":"state_subscribeStorage","params":[["0x26aa394eea5630e07c48ae0c9558
Response 1:

Response: {"jsonrpc":"2.0","method":"state_storage","params":{"subscription":"1","result":{"block":"0x1474bb43891a2e4f50ecfdf61937ede1d49e7ad188dbfeedec827fe47ad28c15","changes":[["0x26aa394eea5630e07c48ae0c9558cef7b99d880ec681799c0cf30e8886371da9a878be6bb31474b3a62564a1fe0189a8e07f77a9dda9f43d27763f5f6d1883b467bbd237777ddd9063b54e22adb2b816",null]]}}}

Request 2:
{"id":9,"jsonrpc":"2.0","method":"chain_getHeader","params":[]}
Response:

Response: {"jsonrpc":"2.0","id":9,"result":{"parentHash":"0x1474bb43891a2e4f50ecfdf61937ede1d49e7ad188dbfeedec827fe47ad28c15","extrinsicsRoot":"0xb1efb6af6ed24680e628703d77e6657c8d906d5e39ddf89d7fb9364f368cc8ce","stateRoot":"0x1e6d18c7e2c6aee52336100523a915b1bcc8d2a09730bd915478e289ed00174c","number":"0x2fa1ee","digest":{"logs":["0x064241424534020b000000c950f30f00000000","0x05424142450101928b3af0104851b4047fea61ddf96ca98069ded3bd49e710ef669ed8d3a5a27abecb6d1b9f76cc3e519cf427f0dceb63c30744ec1194c18fa197499cfe3b2385"]}}}

Request 3:
{"id":10,"jsonrpc":"2.0","method":"chain_getFinalizedHead","params":[]}
Response

{"jsonrpc":"2.0","id":10,"result":"0x1474bb43891a2e4f50ecfdf61937ede1d49e7ad188dbfeedec827fe47ad28c15"}

That last response triggers polkadotJs to try to fetch the header of the hash contained in the response:
Request
{"id":11,"jsonrpc":"2.0","method":"chain_getHeader","params":["0x1474bb43891a2e4f50ecfdf61937ede1d49...
Response

{"jsonrpc":"2.0","id":11,"result":null}

This null value is returned because the provided hash (last known finalized block at the time) was not found in client.known_blocks:
https://github.com/paritytech/substrate-lite/blob/39e321212cf89183803008feada85459c3f450d3/bin/wasm-node/rust/src/lib.rs#L317-L323

As far as I can see, known_blocks is an LruCache with a maximum capacity of 256, which is reached so quickly that the cache starts evicting items. The most likely item to be evicted is finalized_block, since it is the least-used item during the short period of time required to fill the cache.

Add support for Aura consensus algorithm

It would be great to add support for Aura, as many Substrate chains use it.

While implementing Aura is quite simple in the absolute, the difficulty of this issue is modifying the code in order to handle the possibility of multiple different consensus algorithms.

Smoldot crashes when issuing a state_getStorage request (JSON RPC errors not implemented yet)

Smoldot crashes when querying the storage while the chain hasn't synced yet:

Provider log

2021-01-22 13:55:16 SMOLDOT-PROVIDER: calling state_getStorage {"id":8,"jsonrpc":"2.0","method":"state_getStorage","params":["0xf0c365c3cf59d671eb72da0e7a4113c49f1f0515f462cdcf84e0f1d6045dfcbb"]}

<snip>

  Uncaught exception in src/examples/api.test.ts

  'panicked at \'not yet implemented: StorageRetrieval { errors: [] }\', bin/wasm-node/rust/src/lib.rs:595:30'

It would obviously be useful to get an error response written back to the JavaScript side instead of a crash. I'm opening this ticket so I can track when it's implemented.

https://github.com/paritytech/smoldot/blob/main/bin/wasm-node/rust/src/lib.rs#L595

state_getStorage RPC request crashes the light client

PolkadotJS WsProvider -> Smoldot WASM node (demo.js)

Client snippet to reproduce issue

import { ApiPromise, WsProvider } from '@polkadot/api';
const provider = new WsProvider('ws://127.0.0.1:9944/');

const api = await ApiPromise.create({ provider });
const { nonce, data: balance } = await api.query.system.account('5FHyraDcRvSYCoSrhe8LiBLdKmuL9ptZ5tEtAtqfKfeHxA4y');

Logs from PolkadotJS

2021-01-21 14:39:16          API-WS: calling state_getStorage {"id":8,"jsonrpc":"2.0","method":"state_getStorage","params":["0xf0c365c3cf59d671eb72da0e7a4113c49f1f0515f462cdcf84e0f1d6045dfcbb"]}
2021-01-21 14:39:16          API-WS: received {"jsonrpc":"2.0","id":7,"result":"0"}
2021-01-21 14:39:16          API-WS: received {"jsonrpc":"2.0","id":7,"result":{"spec_name":"polkadot","impl_name":"smoldot","authoring_version":0,"spec_version":23,"impl_version":0,"transaction_version":4}}
2021-01-21 14:39:16          API-WS: Unable to find handler for id=7
2021-01-21 14:39:17          API-WS: disconnected from ws://127.0.0.1:9944/: 1006:: Connection dropped by remote peer.
2021-01-21 14:39:18          API-WS: socket error _Event {

Logs from WASM client demo.js

Thu Jan 21 2021 14:25:53 GMT+0000 (GMT) Connection accepted.
Received Message: {"id":1,"jsonrpc":"2.0","method":"chain_getBlockHash","params":[0]}
Sending back: {"jsonrpc":"2.0","id":1,"result":"0x329843be419f87d0247d7d043c204316248fae05da044fed932b45a7f01af72e…
Received Message: {"id":2,"jsonrpc":"2.0","method":"state_getRuntimeVersion","params":[]}
Sending back: {"jsonrpc":"2.0","id":2,"result":{"spec_name":"polkadot","impl_name":"smoldot","authoring_version":0…
Received Message: {"id":3,"jsonrpc":"2.0","method":"system_chain","params":[]}
Received Message: {"id":4,"jsonrpc":"2.0","method":"system_properties","params":[]}
Sending back: {"jsonrpc":"2.0","id":3,"result":"Westend"}
Sending back: {"jsonrpc":"2.0","id":4,"result":{"ss58Format":42,"tokenDecimals":12,"tokenSymbol":"WND"}}
Received Message: {"id":5,"jsonrpc":"2.0","method":"rpc_methods","params":[]}
Sending back: {"jsonrpc":"2.0","id":5,"result":{"version":1,"methods":["account_nextIndex","author_hasKey","author…
Received Message: {"id":6,"jsonrpc":"2.0","method":"state_getMetadata","params":[]}
Sending back: {"jsonrpc":"2.0","id":6,"result":"0x6d6574610b641853797374656d011853797374656d3c1c4163636f756e740101…
Received Message: {"id":7,"jsonrpc":"2.0","method":"state_subscribeRuntimeVersion","params":[]}
Sending back: {"jsonrpc":"2.0","id":7,"result":"0"}
Sending back: {"jsonrpc":"2.0","id":7,"result":{"spec_name":"polkadot","impl_name":"smoldot","authoring_version":0…
Received Message: {"id":8,"jsonrpc":"2.0","method":"state_getStorage","params":["0xf0c365c3cf59d671eb72da0e7a4113c49f1…

file:///home/raoul/code/paritytech/substrate-lite/bin/wasm-node/javascript/index.js:59
        throw message;
        ^
panicked at 'not yet implemented', bin/wasm-node/rust/src/lib.rs:595:28
(Use `node --trace-uncaught ...` to show where the exception was thrown)

Get rid of ChainInformationConfig and BabeGenesisConfiguration

At the moment, the state of the consensus engine is split in two: a "chain configuration" that can be retrieved using the genesis block, and a "current state".

In order to support the light client checkpoint system, and in order to make the code more future-proof, we should get rid of the "chain configuration" part and move everything in there to the "current state" part.

Support nested calls in virtual machine

In the Substrate sandboxing API, the runtime can call a function in a separate VM (the guest), and this guest can in turn call functions in its supervisor. These "callbacks" happen while the invoke function (which invokes the guest) is still in progress.

In order to be able to implement this, the virtual machine API needs to support being able to call a function while an existing one is in an interrupted state.

Add a checkpoint warp sync system to the full node

We now have chain specs that contain the state of the chain at the finalized block.

It can't be used by full nodes at the moment, because they're missing the storage of the finalized block.
However, we can add a system that makes full nodes download the storage from other nodes, for instance through the existing storage proofs system.

Implement outgoing pings

At the moment we don't detect when a TCP connection is dead (for example because the Internet connection has dropped).
Substrate periodically sends a ping to every remote for this purpose, but we don't currently do that.
