spacemeshos / poet

Spacemesh PoET service reference implementation

License: MIT License


poet's Introduction

PoET

Overview

This project implements the Proofs of Sequential Work (PoSW) protocol construction defined in Simple Proofs of Sequential Work, which can be used as a proxy for Proof of Elapsed Time (PoET). We follow the paper's definitions and construction, and are guided by its reference Python implementation.

The executable package in this repository is the PoET service, which harnesses the core protocol to provide proving services to multiple clients with amortized CPU cost. It is designed to be used by the Spacemesh network and is part of the broader Spacemesh protocol.

For more information, visit the PoET protocol specification.

Build

Since the project uses Go modules (introduced in Go 1.11), it's best to place the code outside your $GOPATH. Read this for alternatives.

git clone https://github.com/spacemeshos/poet.git
cd poet
go build

Run the service

The following starts a new epoch every 60s; poet stops execution 5s before the next round starts, leaving a cycle gap before the next epoch:

./poet --genesis-time=2022-08-09T09:21:19+03:00 --epoch-duration=60s --cycle-gap=5s

--genesis-time is set according to RFC3339.

Use the sample configuration file

./poet --configfile=$PWD/sample-poet.conf

Show the help message

./poet --help

Run the tests

go test ./...

Specifications

Section numbers in Simple Proofs of Sequential Work are referenced by this spec.

Constants (See section 1.2)

  • t:int = 150. A statistical security parameter. (Note: is 150 needed for Fiat-Shamir, or does 21 suffice?) Shared between prover and verifier

  • w:int = 256. A statistical security parameter. Shared between prover and verifier

  • n:int - time parameter. Shared between verifier and prover

Note: The constants are fixed and shared between the prover and the verifier. The values shown here are just an example and may be set differently for different PoET server instances.

Definitions (See section 4, 5.1 and 5.2)

  • x: {0,1}^w = rndBelow(2^w - 1) - verifier provided input statement (commitment)

  • N:int - number of iterations. N := 2^(n+1) - 1. Computed based on n. For the purposes of the implementation N is determined after running poet for a specific amount of time.

  • m:int , 0 <= m <= n. Defines how much data should be stored by the prover.

  • M : Storage available to the prover, of the form (t + n*t + 1 + 2^{m+1})*w bits, 0 <= m <= n. For example, with w=256, n=40, t=150 and m=20: (150 + 40*150 + 1 + 2^21) * 256 bits ≈ 67MB, so the prover should use around 70MB of memory, and make N/1000 queries for openH.

  • Hx : {0,1}^{<= w(n+1)} => {0,1}^w . Hx is constructed in the following way: Hx(i) = H(x,i), where H() is a cryptographic hash function. The implementation should use a macro or inline function for H(), and should support a command-line switch that allows it to run with either H()=sha3() or H()=sha256().
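As an illustration, a minimal Go sketch of this construction, assuming H()=sha256() (the function name and shape are ours, not necessarily the repository's API):

package hash

import "crypto/sha256"

// Hx computes H(x, data...): the statement x is hashed first, followed by
// the packed input. A sketch only; a real implementation would select the
// underlying H() (sha256 or sha3) via a command-line switch.
func Hx(x []byte, data ...[]byte) []byte {
	h := sha256.New()
	h.Write(x) // the input statement x comes first
	for _, d := range data {
		h.Write(d)
	}
	return h.Sum(nil)
}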

  • φ : (phi) value of the DAG Root label l_epsilon computed by PoSW^Hx(N)

  • φP : (phi_P) Result of PoSW^Hx(N) stored by prover. List of labels in the m highest layers of the DAG.

  • γ : (gamma) {0,1}^{t*w}. A random challenge sampled by the verifier and sent to the prover for interactive proofs. Created by concatenation of {gamma_1, ..., gamma_t}, where each gamma_i is sampled at random from {0,1}^w

  • τ := openH(x,N,φP,γ) proof computed by prover based on the verifier-provided challenge γ. A list of t tuples, where each tuple is defined as follows for 0 <= i < t: {label_i, list_of_siblings_to_root_from_i}. So, for each i, the tuple contains the label of the node with identifier i, as well as the labels of all siblings of the nodes on the path from node i to the root.

  • NIP (Non-interactive proof): a proof τ created by computing openH for the challenge γ := (Hx(φ,1),...,Hx(φ,t)), i.e. without receiving γ from the verifier. The verifier asks for the NIP and verifies it like any other openH proof using verifyH.

  • verifyH(x,N,φ,γ,τ) ∈ {accept, reject} - function computed by verifier based on prover provided proof τ.

Actors

  • Prover: The service providing proofs for verifiers
  • Verifier: A client using the prover by providing the input statement x, and verifying the prover-provided proofs (by issuing random challenges, or by verifying a non-interactive prover-provided proof for {PoSW^Hx(N), x})

Base Protocol Test Use Cases

Use case 1 - Basic random challenges verification test

  1. Verifier generates random commitment x and sends it to the prover
  2. Prover computes PoSW^Hx(N) by making N sequential queries to H and stores φP
  3. Prover sends φ to verifier
  4. Verifier creates a random challenge γ and sends it to the prover
  5. Prover returns proof τ to the verifier
  6. Verifier computes verifyH() for τ and outputs accept

Use case 2 - NIP Verification test

  1. Verifier generates random commitment x and sends it to the prover
  2. Prover computes PoSW^Hx(N) by making N sequential queries to H and stores φP
  3. Prover creates NIP and sends it to the verifier
  4. Verifier computes verifyH() for NIP and outputs accept

Use case 3 - Memory requirements verification

  • Use case 1 with a test that prover doesn't use more than M memory

Use case 4 - Bad proof detection

  • Modify use case 1 where a random bit is changed in τ proof returned to the verifier by the prover
  • Verify that verifyH() outputs reject for the verifier

Use case 5 - Bad proof detection

  • Modify use case 2 where a random bit is changed in the NIP returned to the verifier by the prover
  • Verify that verifyH() outputs reject for the verifier

Theoretical background and context

Related work

Implementation Guidelines

DAG

The core data structure used by the prover.

DAG Definitions (See section 4)
  • We define n as the depth of the DAG. We set N = 2^(n+1)-1 where n is the time param. e.g. for n=4, N = 31

  • We start with Bn - the complete binary tree of depth n where all edges go from leaves up the tree to the root. Bn has 2^n leaves and 2^n -1 internal nodes. We add edges to the n leaves in the following way:

    For each leaf i of the 2^n leaves, we add an edge to the leaf from all the direct siblings of the nodes on the path from the leaf to the root node.

    In other words, for every leaf u, we add an edge to u from node v_{b-1}, iff v_{b} is an ancestor of u and nodes v_{b-1}, v_{b} are direct siblings.

  • Each node in the DAG is identified by a binary string in the form 0, 01, 0010 based on its location in Bn. This is the node id

  • The root node at height 0 is identified by the empty string ""

  • The nodes at height 1 (l0 and l1) have ids 0 and 1. The nodes at height 2 have ids 00, 01, 10 and 11, etc. So for each height h, a node's id is an h-bit binary string that uniquely defines the location of the node in the DAG

  • We say node u is a parent of node v if there's a direct edge from u to v in the DAG (based on its construction)

  • Each node has a label. The label li of node i (the node with id i) is defined as:

li = Hx(i, lp1, ..., lpd) where (p1, ..., pd) = parents(i)

For example, the root node's label is lε = Hx(bytes(""), l0, l1), as it has only 2 parents, l0 and l1, and its id is the empty string "".

Implementation Note: packing values for hashing
  • To pack an identifier, e.g. "00111", for the input of Hx(), encode it as a UTF-8 byte array. For example, in Go use: []byte("00111")
  • Labels are arbitrary 32 bytes of binary data so they don't need any encoding.
  • As an example, to compute the input for Hx("001", label1, label2), encode the binary string as a UTF-8 byte array and append the label byte arrays to it.
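For illustration, a hypothetical Go helper implementing this packing (labels are assumed to be raw 32-byte slices):

// packHxInput encodes a node id as a UTF-8 byte array and appends the raw
// label bytes of its parents, e.g. packHxInput("001", label1, label2).
func packHxInput(id string, labels ...[]byte) []byte {
	buf := []byte(id) // UTF-8 encoding of the identifier
	for _, l := range labels {
		buf = append(buf, l...)
	}
	return buf
}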
Computing node parents ids

Given a node i in a DAG(n), we need a way to determine its set of parent nodes. For example, we use the set to compute its label. This can be implemented without having to store all DAG edges in storage.

Note that with this binary string labeling scheme we get the following properties:

  1. The id of the left sibling of a node in the DAG is the node's id with the last (LSB) bit flipped from 1 to 0. e.g. the left sibling of the node with id 1001 is 1000
  2. The id of a node's direct parent in Bn equals the node's id with the last bit removed. e.g. the parent of the node with id 1011 is 101
  • Using these properties, the parent ids can be computed based only on the DAG definition and the node's identifier, by the following algorithm (see the Go sketch after the examples below):

If the id has n bits (the node is a leaf in DAG(n)), add the ids of all left siblings of the nodes on the path from the node to the root; otherwise, add to the set the 2 nodes below it (its left and right children) as defined by the binary tree Bn.

  • So for example, for n=4: the parents of the node with id 0 are the nodes with ids 00 and 01; the ids of the parents of leaf node 0011 are 0010 and 000; and the ids of the parents of node 1101 are 1100, 10 and 0.

  • Note that a leaf node's parents are not all the siblings on the path from the leaf to the root, but only the left siblings on that path, i.e. siblings with ids that end with a 0.
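The algorithm above can be sketched in Go as follows (assuming ids are binary strings as defined earlier; the function name is illustrative):

// parents returns the ids of the parents of the node with the given id in
// DAG(n), derived from the id alone - no stored edges are needed.
func parents(id string, n int) []string {
	if len(id) < n {
		// Internal node: its parents are its two children in Bn.
		return []string{id + "0", id + "1"}
	}
	// Leaf: its parents are the left siblings of the nodes on the path
	// from the leaf to the root.
	var ps []string
	for i := len(id); i > 0; i-- {
		if id[i-1] == '1' {
			ps = append(ps, id[:i-1]+"0")
		}
	}
	return ps
}

For example, parents("0", 4) returns [00 01] and parents("1101", 4) returns [1100 10 0], matching the examples above.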

DAG Construction (See section 4, Lemma 3)
  • Compute the label of each DAG node, and store only the labels of the DAG from the root up to level m
  • Computing the labels of the DAG should use up to w * (n + 1) bits of RAM
  • The following is a possible algorithm that satisfies these requirements. However, any implementation that satisfies them (with equivalent or better computational complexity) is also acceptable.

Recursive computation of the labels of DAG(n):

  1. Compute the labels of the left subtree (tree with root l0)
  2. Keep the label of l0 in memory and discard all other computed labels from memory
  3. Compute the labels of the right subtree (tree with root l1) - using l0
  4. Once l1 is computed, discard all other computed labels from memory and keep l1
  5. Compute the root label lε = Hx("", l0, l1)
  • When a label value is computed by the algorithm, store it in persistent storage if the label's height <= m.
  • Note that this works because only l0 is needed for computing labels in the tree rooted in l1. All of the additional edges to nodes in the tree rooted at l1 start at l0.
  • Note that the reference Python code does not construct the DAG in this manner and keeps the whole DAG in memory. Please use the Python code as an example for simpler constructions such as binary strings, open and verify.
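The recursive construction can be sketched in Go as follows, assuming Hx and packHxInput from the sketches above; store is a caller-supplied callback that persists a label when its height is <= m. This is a sketch under those assumptions, not the repository's actual code (in particular, the canonical ordering of a leaf's parent labels must match the spec):

// computeLabels computes the label of node id in DAG(n) by recursing into
// its subtree. leftSiblings holds the labels of the left siblings on the
// path from the root down to id; these are extra parents of every leaf in
// id's subtree. Only one label per level is kept in memory (~w*(n+1) bits).
func computeLabels(x []byte, id string, n int, leftSiblings [][]byte, store func(id string, label []byte)) []byte {
	var label []byte
	if len(id) == n {
		// Leaf: its parents are the accumulated left siblings.
		label = Hx(x, packHxInput(id, leftSiblings...))
	} else {
		l0 := computeLabels(x, id+"0", n, leftSiblings, store)
		// l0 becomes a parent of every leaf in the right subtree
		// (copied to avoid slice aliasing between branches).
		ls := append(append([][]byte{}, leftSiblings...), l0)
		l1 := computeLabels(x, id+"1", n, ls, store)
		label = Hx(x, packHxInput(id, l0, l1))
	}
	store(id, label) // the caller persists only labels with height <= m
	return label
}

Calling computeLabels(x, "", n, nil, store) returns the root label φ.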
DAG Storage

Please use a binary data file to store labels and not a key/value db. Labels can be stored in the order in which they are computed.

Given a node id, the index of the label value in the data file can be computed by:

idx = sum of the sizes of the subtrees under the left siblings on the path to the root + the size of the node's own subtree.

The size of the subtree under a node is simply (2^{height+1} - 1) * the label size (e.g. 32 bytes), where height is the node's distance above the leaves.
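Assuming every computed label is appended to the file in computation (post-order) order, the index can be sketched in Go like this (0-based and counted in labels; multiply by the label size, e.g. 32 bytes, for a byte offset):

// fileIndex returns the 0-based index of a node's label in the data file
// for DAG(n): the subtrees of all left siblings on the path from the root,
// plus the node's own descendants, are written before the node itself.
func fileIndex(id string, n int) int {
	idx := 0
	for i := 0; i < len(id); i++ {
		if id[i] == '1' {
			h := n - i - 1      // height of the left sibling's subtree
			idx += 1<<(h+1) - 1 // it holds 2^(h+1)-1 labels
		}
	}
	h := n - len(id)
	idx += 1<<(h+1) - 2 // the node's own descendants precede it
	return idx
}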

Data Types

Commitment

Arbitrary-length bytes. The verifier implementation should just use H(commitment) to create a commitment that is in {0,1}^w. So the actual commitment can be sha256(commitment) when w=256.

Challenge

A challenge is a list of t random binary strings in {0,1}^n. Each binary string is an identifier of a leaf node in the DAG. Note that each binary string should always be exactly n bits long, including any leading or trailing 0s, e.g. 0010.

Proof (See section 5.2)

A proof needs to include the following data:

  1. φ - the label of the root node.
  2. For each identifier i in a challenge (0 <= i < t), an ordered list of labels which includes:
     2.1 li - the label of the node i
     2.2 An ordered list of the labels of the sibling of each node on the path from node i to the root, omitting siblings that were already included in a previous siblings list in the proof

So, for example for DAG(4) and for a challenge identifier 0101 - The labels that should be included in the list are: 0101, 0100, 011, 00 and 1. This is basically an opening of a Merkle tree commitment.

The complete proof data can be encoded in a tuple where the first value is φ and the second value is a list with t entries. Each of the t entries is a list starting with the node with identifier_t label, and a node for each sibling on the path to the root from node identifier_t:

{ φ, {list_of_siblings_on_path_to_root_from_0}, .... {list_of_siblings_on_path_to_root_from_t} }

Note that we don't need to include the identifiers in the proof, as they are computed by the verifier.

Also note that the proof should omit labels from the siblings lists that were already included once previously in the proof. The verifier should create a dictionary of label values keyed by their node id, populate it from the siblings lists it receives, and use it to look up label values omitted from later siblings lists - as the verifier knows the ids of all siblings on the path to the root from a given node.
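For illustration only, the proof could be modeled with Go types like these (hypothetical names; the actual wire encoding may differ):

// Label is a w-bit node label (w=256 -> 32 bytes).
type Label [32]byte

// Proof is an opening of the commitment φ for the t challenge identifiers.
type Proof struct {
	Phi Label // φ, the root label
	// Lists has t entries. Entry i starts with the label of the i-th
	// challenged leaf, followed by the labels of the siblings on its path
	// to the root, omitting labels already sent earlier in the proof.
	Lists [][]Label
}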

About NIPs

A NIP is a proof created by computing openH for the challenge γ := (Hx(φ,1),...,Hx(φ,t)), i.e. without receiving γ from the verifier. The verifier asks for the NIP and verifies it like any other openH proof using verifyH. Note that the prover generates a NIP using only Hx(), t (the shared security param) and φ (generated by PoSW(n)). To verify a NIP, the verifier generates the same challenge γ and verifies the proof using this challenge.

The output of Hx(φ,i) is w bits, but each challenge identifier should be n bits long. To create an identifier from Hx(φ,i) we take the leftmost n bits, starting from the most significant bit. So, for example, for n == 3 and Hx(φ,1) = 010011001101..., the identifier will be 010.
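A Go sketch of this derivation, assuming the Hx sketch from above and big-endian encoding of i (requires "encoding/binary"):

// nipChallenge derives the t challenge identifiers from φ: identifier i is
// the n most-significant bits of Hx(φ, i), rendered as a binary string.
func nipChallenge(x, phi []byte, t, n int) []string {
	ids := make([]string, 0, t)
	for i := uint64(1); i <= uint64(t); i++ {
		var ib [8]byte
		binary.BigEndian.PutUint64(ib[:], i)
		h := Hx(x, phi, ib[:])
		id := make([]byte, n)
		for b := 0; b < n; b++ { // walk the leftmost n bits, MSB first
			if h[b/8]&(1<<uint(7-b%8)) != 0 {
				id[b] = '1'
			} else {
				id[b] = '0'
			}
		}
		ids = append(ids, string(id))
	}
	return ids
}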

GetProof(challenge: Challenge) notes

The prover only stores up to m layers of the DAG (from the root at height 0, up to height m) and the labels of the 2^n leaves. Generating a proof involves computing the labels of the siblings on the path from a leaf to the DAG root, where some of these siblings are in DAG layers with height > m. These labels are not stored by the prover and need to be computed when a proof is generated. The following algorithm describes how to compute these siblings:

  1. For each node_id included in the input challenge, compute the id of its ancestor at DAG level m - the node on the path from node_id to the root at level m (i.e. the first m bits of node_id)

  2. Construct the DAG rooted at that ancestor node. When the label of a sibling on the path from node_id to the root is computed as part of this construction, add it to the list of sibling labels on the path from node_id to the root

Computing Hx(y,z) - Argument Packing

When we hash 2 (or more) arguments, for example Hx(φ,1), we need to agree on a canonical way to pack the params into 1 input byte array across implementations and tests. We define argument packing in the following way:

  • Each argument is serialized to a []byte.
  • String serialization: utf-8 encoding to a byte array
  • Int serialization (uint8, ... uint64): BigEndian encoding to a byte array
  • The arguments are concatenated by appending the bytes of each argument to the bytes of the previous argument. For example, Hx(φ,i) = Hx([]byte(φ) ... []byte(i))
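For illustration, a hypothetical Go helper implementing these rules (requires "bytes" and "encoding/binary"):

// packArgs serializes and concatenates Hx() arguments: strings as UTF-8
// bytes, unsigned ints as big-endian bytes, and labels as raw bytes.
func packArgs(args ...interface{}) []byte {
	var buf bytes.Buffer
	for _, a := range args {
		switch v := a.(type) {
		case string:
			buf.WriteString(v) // UTF-8 bytes
		case []byte:
			buf.Write(v) // raw bytes, e.g. labels
		case uint64:
			var b [8]byte
			binary.BigEndian.PutUint64(b[:], v)
			buf.Write(b[:])
		}
	}
	return buf.Bytes()
}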


poet's Issues

Packages refactoring

  • Extract verifier code from internal to a separated verifier package.
  • Rename internal package as prover, and extract shared code to shared/common package.

PoET discovery

Overview / Motivation

We want to support multiple PoET services and allow a miner to discover them and select between them.

The Task

TODO: Clearly describe the issue requirements here...

Implementation Notes

TODO: Add links to relevant resources, specs, related issues, etc...

Contribution Guidelines

Important: Issue assignment to developers will be by the order of their application and proficiency level according to the task's complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter dev channel

  1. Introduce yourself on go-spacemesh dev chat channel - ask our team any question you may have about this task
  2. Fork branch develop to your own repo and work in your repo
  3. You must document all methods, enums and types with godoc comments
  4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature
  5. When ready for code review, submit a PR from your repo back to branch develop
  6. Attach relevant issue to PR

PoET core and service

Build the client/server protocol over the PoET core implementation to serve multiple miners per a single sequential-work instance.

PoET CI should be executed on Windows and MacOS

Summary

PoET is used as a dependency of go-spacemesh. Since go-spacemesh is built for Linux, MacOS and Windows, PoET should also be buildable on all 3 platforms.

Acceptance criteria

  • Update CI to build and run tests on all 3 platforms
    • Add standard set of linters & checkers in CI (staticcheck, golangci-lint, test-fmt, etc.)
    • Report passing & failing tests using JUnit Report
  • Target Go 1.19 instead of 1.18
  • Fix existing tests that might fail on non-Linux platforms

Log messages as JSON

In order to parse poet log messages with Elasticsearch (ES), we need to have those logs as JSON.
Use the logging infra from go-spacemesh to print the log messages as JSON when the --test-mode execution parameter is configured.

Faulty optimization in PoET verification

In verifier.Verify, the optimization that marks the proof as valid if we already handled one of the siblings on the way to the root is wrong. The fact that we got to a known sibling says nothing about the validity of the path that we already calculated (i.e. from the leaf to the current position). A correct optimization would require storing sibling pairs in the cache (instead of only one of them); if we get to a known pair we can indeed stop the verification and mark the proof as valid.

Docker push workflow triggered on PR instead of on merge

With the introduction of dependabot a small issue popped up; PRs created with dependabot failed their CI builds (e.g. this one: #117)

The reason for that is that dependabot doesn't have access to the same set of secrets as PRs created by others have. So it cannot authenticate against Dockerhub and the build fails.

The question here is: should we move the "Docker build & push" step out of the CI workflow and into one that is only triggered AFTER the merge of a PR, instead of on every commit? At the moment a new image is released every time the image build succeeds, even if tests failed.

@dshulyak @lrettig @evt

Integration with node broken

Recent updates #66 and #67 broke integration with the go-spacemesh node. Please check the node and run tests before, or at least after, merging changes.

Those changes break all integration, to the point that go-spacemesh cannot be tested right now.

Every execution round should be followed by an idle period

The PoET period should consist of an execution part, during which the PoSW is executed, and an idle period, during which the poet waits for the beginning of the next execution round. The purpose of the idle period is to allow the miners to generate a PoST proof and get ready for the next round of PoET (assuming that a miner will mostly work with the same PoET for many epochs).

Verifier error handling

The verifier currently crashes (panics) if it is given incorrect values (an invalid proof). It needs to gracefully report these errors instead.

failed creating poet client harness: error during killing process: exit status 123 | kill: you need to specify whom to kill

Alpine Linux (which we use to run our tests in docker) has a funny version of lsof installed that doesn't tell the truth:

/go/src/github.com/spacemeshos/go-spacemesh # lsof -i tcp:18550 | grep LISTEN | awk '{print $2}' | xargs kill -9
kill: you need to specify whom to kill
/go/src/github.com/spacemeshos/go-spacemesh # apk add busybox-extras
(1/1) Installing busybox-extras (1.31.1-r19)
Executing busybox-extras-1.31.1-r19.post-install
Executing busybox-1.31.1-r16.trigger
OK: 160 MiB in 44 packages
/go/src/github.com/spacemeshos/go-spacemesh # busybox-extras telnet localhost 18550
Connected to localhost
@hello
/go/src/github.com/spacemeshos/go-spacemesh # lsof -i tcp:18550
1       /bin/busybox    /dev/pts/0
1       /bin/busybox    /dev/pts/0
1       /bin/busybox    /dev/pts/0
1       /bin/busybox    /dev/tty
2941    /tmp/poet/poet  /dev/null
2941    /tmp/poet/poet  /dev/null
2941    /tmp/poet/poet  pipe:[865420]
2941    /tmp/poet/poet  /root/.poet/logs/poet.log
2941    /tmp/poet/poet  anon_inode:[eventpoll]
2941    /tmp/poet/poet  pipe:[862641]
2941    /tmp/poet/poet  pipe:[862641]
2941    /tmp/poet/poet  socket:[862642]
2941    /tmp/poet/poet  socket:[862646]
2941    /tmp/poet/poet  socket:[864392]
2941    /tmp/poet/poet  socket:[863516]
2941    /tmp/poet/poet  /tmp/poet/data/1680/challengesDb/LOCK
2941    /tmp/poet/poet  /tmp/poet/data/1680/challengesDb/LOG
2941    /tmp/poet/poet  /tmp/poet/data/1680/challengesDb/MANIFEST-000000
2941    /tmp/poet/poet  /tmp/poet/data/1680/challengesDb/000001.log
2941    /tmp/poet/poet  /tmp/poet/data/1679/layercache_0.bin (deleted)
/go/src/github.com/spacemeshos/go-spacemesh # apk add lsof
(1/1) Installing lsof (4.93.2-r0)
Executing busybox-1.31.1-r16.trigger
OK: 160 MiB in 45 packages
/go/src/github.com/spacemeshos/go-spacemesh # lsof -i tcp:18550
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
poet    2941 root   11u  IPv4 862642      0t0  TCP localhost:18550 (LISTEN)
poet    2941 root   13u  IPv4 864392      0t0  TCP localhost:50380->localhost:18550 (ESTABLISHED)
poet    2941 root   14u  IPv4 863516      0t0  TCP localhost:18550->localhost:50380 (ESTABLISHED)
/go/src/github.com/spacemeshos/go-spacemesh # lsof -i tcp:18550 | grep LISTEN | awk '{print $2}'
2941
/go/src/github.com/spacemeshos/go-spacemesh # lsof -i tcp:18550 | grep LISTEN | awk '{print $2}' | xargs kill -9
/go/src/github.com/spacemeshos/go-spacemesh # lsof -i tcp:18550
/go/src/github.com/spacemeshos/go-spacemesh # 

This causes problems inside the harness:

args := fmt.Sprintf("lsof -i tcp:%d | grep LISTEN | awk '{print $2}' | xargs kill -9", addr.Port)

I see this error from several of the go-spacemesh tests in github.com/spacemeshos/go-spacemesh/cmd/node including this one:

/go/src/github.com/spacemeshos/go-spacemesh # go test -timeout 0 -p 1 -count 1 -v github.com/spacemeshos/go-spacemesh/
cmd/node -run Test_PoETHarnessSanity
=== RUN   Test_PoETHarnessSanity
    Test_PoETHarnessSanity: app_test.go:72:
                Error Trace:    app_test.go:72
                Error:          Received unexpected error:
                                error during killing process: exit status 123 | kill: you need to specify whom to kill
                Test:           Test_PoETHarnessSanity
--- FAIL: Test_PoETHarnessSanity (1.16s)
FAIL
FAIL    github.com/spacemeshos/go-spacemesh/cmd/node    1.184s
FAIL

This code should make sure the output of the shell command here is not null before passing it to kill.

Out of memory crash

Either a memory leak or an out-of-memory issue in the current code-base: n=33, 32GB RAM, crashes at 75% of DAG generation. This blocks further benchmarks. We need a better way to generate the DAG than the naive in-memory depth-first traversal that is currently implemented. @zalmen @moshababo @noamnelke

Cleanup proof endpoints

Now that PoET sends membership and PoET proofs via gossip, we should clean up the old gRPC endpoints for obtaining proofs. This may require changing how some tests work.

Re-broadcast whenever new gateway nodes are set

@noamnelke wrote:

When all PoET server attempts to broadcast a proof fail we currently can’t re-broadcast a proof without restarting the PoET server. This can easily be improved by initiating a re-broadcast whenever new gateway nodes are set via the API.

Use the new merkle-tree implementation

The number of leaves to prove depends on the security parameter constant (T).
merkle-tree can be used to create a more efficient proof for multiple leaves, especially for the verifier, which currently uses a key-value cache to detect repeating nodes in the tree.

Add retries to broadcast failure

When a round broadcast fails, the round stays in an "unbroadcasted" state, and only the recovery mechanism can initiate another attempt to broadcast. This is not enough; we should introduce delayed retries after the initial failure, as in the sketch below.
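One possible shape for such delayed retries, as a sketch only (the function, its arguments and the backoff policy are assumptions, not the actual codebase's API; requires "context" and "time"):

// broadcastWithRetry retries a failed round broadcast with exponential
// backoff instead of leaving the round "unbroadcasted" until recovery.
func broadcastWithRetry(ctx context.Context, broadcast func() error) error {
	delay := time.Minute
	for {
		if err := broadcast(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
		if delay < time.Hour {
			delay *= 2 // exponential backoff, capped at one hour
		}
	}
}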

service.Info() accesses resources concurrently without protection

service.go: Info() reads from openRound and executingRounds while the goroutine started in Start() writes to these variables. There's even protection (a mutex) on executingRounds inside Start(), but none when reading in Info().
It is important to note that Info() can be called at any time, as it is triggered by an API call. A possible fix is sketched below.

cc: @moshababo
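A sketch of taking the same lock for reads (type and field names follow the issue text and are assumptions about the actual code; assumes Service embeds a sync.Mutex):

// Info returns a consistent snapshot of the open and executing rounds.
// The same mutex that Start() takes for writes guards these reads.
func (s *Service) Info() *InfoResponse {
	s.Lock()
	defer s.Unlock()
	ids := make([]string, 0, len(s.executingRounds))
	for id := range s.executingRounds {
		ids = append(ids, id)
	}
	return &InfoResponse{
		OpenRoundID:       s.openRound.ID,
		ExecutingRoundIDs: ids,
	}
}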

Add support for multiple sha256() per Hx()

Add a server param, iters: the number of sequential iterations per Hx() when using sha256(), so that we shift the total CPU time of proof building into sha256() computations.

// Hash computes Hx(data...): the statement x is written first, then the
// data, and the digest is then re-hashed h.iters times sequentially.
func (h *sha256Hash) Hash(data ...[]byte) []byte {

	h.hash.Reset()
	h.hash.Write(h.x) // the statement x is always hashed first

	for _, d := range data {
		_, _ = h.hash.Write(d)
	}

	temp := h.hash.Sum([]byte{})

	// Chained iterations: the previous digest is the input of the next,
	// so the extra work is inherently sequential.
	for i := 0; i < h.iters; i++ {
		h.hash.Reset()
		h.hash.Write(h.x)
		h.hash.Write(temp)
		temp = h.hash.Sum(h.emptySlice)
	}

	return temp
}

Add versioning, tagging, docker push

We ran into an issue today where after #105 was merged, go-spacemesh system tests started failing because they don't pin the poet version (see spacemeshos/go-spacemesh#2189). In order to facilitate pinning, we should add versioning here and set up automatic dockerbuild/push when a new version is merged or a new release is cut.

Clean the prover file when execution stops

The prover's key-value storage file is cleaned up after execution finishes and the NIP has been generated.
The file also needs to be cleaned up when the execution is interrupted and doesn't finish.

Open round recovery policy

Background

  • The PoET service starts with the opening of round 1 (the opening duration is specified by the mandatory initialduration flag).
  • Once the duration has elapsed, round 1 starts executing, and round 2 opens.
  • Round 2's start of execution (and round 3's opening) is defined by the rounds schedule (the optional duration flag). If it happens before round 1's end of execution, 2 rounds execute in parallel. If after, there is a period in which no round executes. The current behaviour is to start round 2's execution at round 1's end of execution if the scheduled time is later or not defined (so there's always at least 1 round executing).

Options for when attempting to recover a round in “open” state (after server crash or shutdown):

  1. Execute it immediately and open a new round instead.
  2. Keep it open until the previous round end of execution, regardless of the opening schedule.
  3. If schedule is/was defined, keep it open for the remaining duration, including the duration in which the server was down. If elapsed, execute it immediately.
  4. If schedule is/was defined, keep it open for the remaining duration, not including the duration in which the server was down. This might create a time drift in the overall scheduling.

Option 1 is currently implemented. This creates an additional parallel round execution for every server restart, and thus should be changed.

The main consideration is whether we plan to use the rounds schedule or to just start executing each round at the previous round's end of execution. If the former, options 2-4 are viable. If the latter, option 2 is the fallback and needs to be implemented anyway.

Please provide feedback for the desired behaviour.

Make PoET thread safe

One case where we're not thread safe is when closing a round: we may lose the last membership submissions if they arrive at the last moment.

A thorough review is in order.

Document usage

There's currently no documentation on how to run the PoET server. Add at least a basic "Running the server" section to the README. It should contain instructions on how to connect go-spacemesh to it.

Happy to work on this.

@moshababo, does anything special need to be done? When I run poet locally, I noticed that it's looking for a poet.conf. Can we add a sample of this to the repo? Do I need to supply any commandline args? Or should go-spacemesh connect to it (on the default port) with no special params/config required?

Implement sha256 using Intel sha256 instructions

This is an epic - should be broken down to several tasks

Background

  • Intel's accelerated crypto instructions define SHA256 extension instructions, but they have not been widely implemented in the last few generations of Intel CPUs
  • AMD ships both server and desktop CPUs with the sha extension instructions (EPYC and RYZEN)
  • Servers with EPYC CPUs are now available on AWS (https://aws.amazon.com/about-aws/whats-new/2018/11/introducing_amazon_ec2_instances_featuring_amd_epyc_processors/), and are good candidates to run the Spacemesh production poet for the testnet timeframe
  • Benchmarks show a 3x-5x performance gain over optimized sha256 using Intel AVX2 instructions - there is no faster published sha256 hardware benchmark that I could find
  • There are already several open source libraries that use the SHA instructions to compute SHA256 for a small input buffer (add links)

Tasks

  • Integrate and test sha256() using the Intel SHA instructions, using assembly code published by open source libraries
  • The hardware-optimized sha256 should be used automatically when the CPU supports SHA (cpuid=SHA)...
  • Security and correctness review of the implementation by our research team
  • Test and profile on both RYZEN (desktop) and EPYC (server) AMD systems

@iddo333 @moshababo @zalmen @tal-m
