finschia / ostracon

Ostracon is a consensus algorithm forked from Tendermint Core. We have added a VRF to Tendermint BFT, which adds randomness to PoS validator elections and improves security.

License: Apache License 2.0


ostracon's Introduction

Finschia


This repository hosts Finschia, which was forked from gaia on 2021-03-15. Finschia is a mainnet app implementation using finschia-sdk, ostracon, wasmd, and ibc-go.

Node: Requires Go 1.20+

Warning: Initial development is in progress, but there has not yet been a stable release.

Quick Start

Docker

Build Docker Image

make docker-build                # build docker image

or

make docker-build WITH_CLEVELDB=yes GITHUB_TOKEN=${YOUR_GITHUB_TOKEN}  # build docker image with cleveldb

Note 1

If you are using an M1 Mac, you need to specify the build args like this:

make docker-build ARCH=arm64

Configure

sh init_single.sh docker          # prepare keys, validators, initial state, etc.

or

sh init_single.sh docker testnet  # prepare keys, validators, initial state, etc. for testnet

Run

docker run -i -p 26656:26656 -p 26657:26657 -v ${HOME}/.finschia:/root/.finschia finschia/finschianode fnsad start

Local

Build

make build
make install 

Configure

sh init_single.sh

or

sh init_single.sh testnet  # for testnet

Run

fnsad start                # Run a node

Then visit the node's RPC endpoint (e.g., http://localhost:26657) with your browser.

Localnet with 4 nodes

Run

make localnet-start

Stop

make localnet-stop

How to contribute

Check out CONTRIBUTING.md for our guidelines and policies on how we develop Finschia. Thank you to all those who have contributed!


ostracon's Issues

Build test environments

It is necessary to build a system to test the developed server.

Why we need a test environment

  • To check whether multiple nodes running the test version work well together.
  • To simulate a large number of nodes.
  • To make sure there are no problems during long-term node consensus.

Requirement

  • Must be able to perform large-scale tests on a minimum of 4 nodes and a maximum of 100 or more.
  • Must be able to monitor each test node.

Error building Tendermint using VRF feature

An error occurs in a full build of the modified Tendermint version that uses the VRF library. For example, building feature/add_vrf_proving_to_privvalidator:

$ export GOPATH=~/go-cleanbuild
$ export PATH=$GOPATH/bin:$PATH
$ cd ~/git
$ git clone -b feature/add_vrf_proving_to_privvalidator --recursive git@github.com:line/tendermint.git tendermint
...
$ make get_tools
...
$ make
Found tools:  gox  golangci-lint  protoc-gen-gogo  certstrap
CGO_ENABLED=0 go build -mod=readonly -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD` -s -w" -tags 'tendermint' -o build/tendermint ./cmd/tendermint/
go: finding github.com/hashicorp/hcl v1.0.0
...
build github.com/tendermint/tendermint/cmd/tendermint: cannot load github.com/tendermint/tendermint/crypto/vrf/internal/vrf: no Go source files
make: *** [build] Error 1

The error indicates that the newly added VRF source cannot be referenced.

I could build the master or feature/integrate_libsodium branch without any error using the same procedure. In feature/integrate_libsodium, the VRF functions were only added and are not used from cmd/main.go, so they were probably compiled but not linked.

For a master build,

$ export GOPATH=~/go-cleanbuild
$ export PATH=$GOPATH/bin:$PATH
$ git clone git@github.com:line/tendermint.git tendermint
Cloning into 'tendermint'...
...
$ cd tendermint
$ make get_tools
...
$ make
...
$ $GOPATH/bin/tendermint version
0.32.2-76f3db06

it seems to work well.

Workaround

Currently, I don't do a full build; I only run make test or go test crypto/vrf/*.go to validate my code.

Environment

  • macOS 10.14.6
  • go version go1.12.5 darwin/amd64

fix dredd test

The current dredd test fails because dredd doesn't support many features of the latest OpenAPI 3.
So we need to customize the supported hooks of dredd.

The supported hooks of dredd are defined in tendermint/cmd/contract_tests/main.go,
and the usage is documented at https://dredd.org/en/latest/hooks/go.html#hooks-go

I'll skip or customize some of the failing cases.

Add VRF proof and output in proposal block

The proposer needs to add its VRF proof and output (hash) to designate the proposer of the next round. In this issue, the current proposer will be modified to embed the VRF proof and output when a new proposal block is generated.

Verify the Legitimacy of Proposer/ValidatorSet by VRF

Modify the code that verifies the legitimacy of a block so that it is certain that the Proposer that generated the block (and, if it was selected by VRF, the ValidatorSet that signed the block) is legitimate.

  1. Find out where the Proposer and ValidatorSet are validated in the current Tendermint. It is probably in the process of verifying a Block.
  2. (TBC)

Opinion of Validator and Rewards on the location of each code being implemented

The ValidatorSet in Tendermint is selected by and delivered from the Cosmos SDK. Tendermint reaches consensus with the selected ValidatorSet and communicates the consensus results back to the Cosmos SDK. Therefore, for our algorithm, which selects only a few of the total validators, this part needs to be well organized.

So I would like to organize it as follows and make a suggestion.

  • The Cosmos SDK selects the Validator Set as it does now.
  • Among this selected validator set, the validators actually elected are obtained by Tendermint to reach a consensus.
  • Then, after the elected validators and their voting power have been agreed upon, they are delivered to the Cosmos SDK through EndBlock.

And the protocol shall be modified to deliver the elected validator set and the respective voting power to EndBlock.

      +-------------+
      |             |
      | LINK        |
      | Application |
      |             |
      +------+------+
             |
        +----+-----+
        |          |
        |COSMOS+SDK|
        |          |
        +--+----^--+
           |    | EndBlock
BeginBlock |    |
        +--v----+--+
        |          |
        |Tendermint|
        |          |
        +----------+

  • The code that selects validators using VRF lives in Tendermint.
  • The reward code lives in the LINK Application.

So I think it would be better to add the Reward Code to line/link/x/distribution.

Proposal: Duplicate election of a validator and deciding voting power

1. Definition of the problem

#22 and #23 propose algorithms for electing the proposer and validators using VRF. Both algorithms, however, have the following open problems:

  • 1 ) Duplicate election
    • 1-a ) Duplicate election allowed; no additional validators for consensus. The number of validator nodes varies per round.
    • 1-b ) Duplicate election disallowed; additional validators are needed. The number of validator nodes is always the same.
  • 2 ) Deciding voting power
    • 2-a ) All elected validators have equal voting power.
    • 2-b ) Each validator has the voting power proportional to the amount of staking.

example)

  • minimum staking unit is 100
  • electing 5 validators

when following 1-a

candidate   staking   elected   voting power 2-a   voting power 2-b
c101         1000
c102         2000      v         100                2000
c103         3000      v v       200                3000
c104          500
c105          300
c106          500
c107          400      v         100                400
c108          300
c109         1000      v         100                1000
c110          300
c111          300
c112          400
total       10000                500                6400

when following 1-b

candidate   staking   elected   voting power 2-a   voting power 2-b
c101         1000      v         100                1000
c102         2000      v         100                2000
c103         3000      v         100                3000
c104          500
c105          300
c106          500
c107          400      v         100                400
c108          300
c109         1000      v         100                1000
c110          300
c111          300
c112          400
total       10000                500                7400

2. Proposal: Allow duplicate election and grant voting power equal to the number of times elected

Prerequisite

The expected voting power of the following two candidates should be the same.

  • 3-a ) staking s*u with one node (u is the minimum staking unit for a candidate)
  • 3-b ) staking u with each of s nodes

The combination of 1-b and 2-a

3-b has an advantage over 3-a.
No matter how much a candidate stakes, it can only obtain voting power up to the number of nodes it runs.

2-b is unreasonable.

The expected voting power becomes proportional to the square of the staking amount, because the stake affects both the winning probability and the expected voting power once elected.

The combination of 1-a and 2-a

3-a is identical to 3-b.

So allowing duplicate election and granting each validator voting power equal to the number of times it is elected is the most reasonable option.
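As a rough illustration of this proposal, here is a minimal Go sketch (the names, the PRNG, and its seeding are illustrative assumptions, not the actual implementation): it draws the configured number of seats with replacement, weighted by stake, and gives each winner voting power proportional to how many times it was drawn.

package main

import (
	"fmt"
	"math/rand"
)

type candidate struct {
	name    string
	staking uint64
}

// electWithDuplicates draws `seats` winners with replacement, each draw weighted
// by stake, and returns voting power = minimumUnit * (times elected).
func electWithDuplicates(cands []candidate, seats int, minimumUnit uint64, rng *rand.Rand) map[string]uint64 {
	var total uint64
	for _, c := range cands {
		total += c.staking
	}
	power := make(map[string]uint64)
	for i := 0; i < seats; i++ {
		dot := uint64(rng.Int63n(int64(total))) // where the "dart" lands on the stake line
		var cum uint64
		for _, c := range cands {
			if dot < cum+c.staking {
				power[c.name] += minimumUnit
				break
			}
			cum += c.staking
		}
	}
	return power
}

func main() {
	cands := []candidate{{"c101", 1000}, {"c102", 2000}, {"c103", 3000}, {"c104", 500}}
	rng := rand.New(rand.NewSource(1)) // in practice the seed would come from the VRF hash
	fmt.Println(electWithDuplicates(cands, 5, 100, rng))
}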

Evaluate the result of #22 and #23

In order to compare the two consensus algorithms, we suggest checking the following results:

  1. Is a candidate elected as a proposer in proportion to its staking amount?
  2. Is a candidate elected as a validator in proportion to its staking amount?
  3. Is voting power allocated in proportion to the staking amount?
  4. When 33% of the total users are Byzantine, can the elected validators still reach agreement without problems?

4-1. If the Byzantine group is formed from the users with the largest staking amounts, taken in order until just under 33% of the total staking, does that group ever hold more than 2/3 of the total voting power in a round?
4-2. If the users are sorted from the smallest staking amount and the Byzantine group is formed by combining them until their stake reaches 33% or less of the total, does that group ever hold more than 2/3 of the total voting power in a round?

Election of ValidatorSet based on VRF

This is a story ticket to change the ValidatorSet election performed on the Cosmos side to a VRF-based method. Before making any modification, we should investigate whether the algorithm for ValidatorSet selection can be changed at all.

  1. Decide whether ValidatorSet selection by VRF should be released together with Proposer selection (at the beginning of the implementation, it was assumed that the selection of the ValidatorSet is left to Cosmos and the Proposer is selected from among them).
  2. Find out where ValidatorSet elections take place in Cosmos and confirm whether the VRF hash contained in the block can be referenced there.
  3. Implement if possible. (TBC)

Establish a goodness-of-fit test for Proposer and Validator election

We need to examine and confirm which goodness-of-fit test is best for verifying that every node in the candidate set is selected as a validator at a frequency that follows the amount of its stake.

For instance, when the following observed frequencies are obtained over 1,000 rounds of single selection from n=100 candidates, the test judges whether they significantly follow the distribution of the stake amounts.

Node i   Observed freq. v_i   Stake S_i   Expected prob. p_i   Expected freq. n*p_i
1        29                   3           3/100                30
2        52                   5           5/100                50
3        8                    1           1/100                10
...
N        81                   8           8/100                80

Against this problem, the well-known χ² (chi-squared) test, for example, provides statistical significance using a chi-square distribution table with N-1 degrees of freedom. The question then becomes whether the resulting χ² value exceeds the chosen significance level (α = 0.05, 0.01, etc.).

Here are some questions.

  • The χ² test seems to have the property that almost any deviation becomes significant as the number of observations increases, such as n=1000. Should we research other methods, such as the generalized chi-square test, as an alternative?
  • Is the χ² test suitable for degrees of freedom on the order of 100 to 100,000?

The test method discussed here could also be used to statistically verify, while operating the blockchain, that no malicious manipulation of the election has taken place.
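For reference, a minimal Go sketch of the χ² statistic itself, using a few of the hypothetical observed and expected frequencies from the table above (the significance threshold would still come from a χ² table with N-1 degrees of freedom):

package main

import "fmt"

// chiSquared returns Σ (observed - expected)^2 / expected.
func chiSquared(observed, expected []float64) float64 {
	var x2 float64
	for i := range observed {
		d := observed[i] - expected[i]
		x2 += d * d / expected[i]
	}
	return x2
}

func main() {
	// Observed election counts v_i and expected counts n*p_i for a few nodes.
	observed := []float64{29, 52, 8, 81}
	expected := []float64{30, 50, 10, 80}
	fmt.Printf("chi-squared = %.3f\n", chiSquared(observed, expected))
}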

Random Election using Stake Amount as Discrete Distribution

The scheme of selecting a Proposer and Validators based on PoS can be considered as random sampling from a group with a discrete probability distribution.

  • S: the total amount of issued stake
  • s_i: the stake amount held by a candidate i (Σ s_i = S)

Random Sampling based on Categorical Distribution

For simplicity, here is an example in which only a Proposer is selected from candidates with winning probability of p_i = s_i / S.

First, create a pseudo-random number generator using vrf_hash as the seed, and determine the threshold for the Proposer. This random number algorithm should be deterministic and portable to other programming languages, but it need not be cryptographically secure.

val rand = new Random(vrf_hash)
val threshold:Long = (rand.nextDouble() * S).toLong

Second, to make the result deterministic, we retrieve the candidates sorted in descending stake order.

val candidates:List[Candidate] = query("SELECT * FROM candidates WHERE stake > 0 ORDER BY stake DESC, public_key");

Finally, find the candidate that the Proposer threshold (the "dart") lands on.

var proposer = candidates.last
var cumulativeStakes:Long = 0
for(c <- candidates){
  if(cumulativeStakes <= threshold && threshold < cumulativeStakes + c.stake){
    proposer = c
    break   // pseudocode: stop at the first hit
  }
  cumulativeStakes += c.stake
}

This is a common way of random sampling from a categorical distribution using a uniform random number. It is similar to throwing a dart at a spinning dartboard on which each item's segment width is proportional to its probability.


Selecting a Consensus Group

By applying the above, we can select a consensus group consisting of one Proposer and V Validators. This is equivalent to performing V+1 categorical trials, i.e., a random-sampling model with a multinomial distribution. It's possible to illustrate this notion using a multinomial distribution demo I created in the past; it corresponds to a model that selects a Proposer and Validators when K is the number of candidates and n=V+1.

As an example of intuitive code, I expand categorical sampling to multinomial.

val thresholds = new Array[Long](V + 1)
for(i <- 0 until thresholds.length){
  thresholds(i) = (rand.nextDouble() * S).toLong
}

var cumulativeStakes:Long = 0
val winner = new Array[Candidate](thresholds.length)
for(c <- candidates){
  for((t, i) <- thresholds.zipWithIndex){
    if(cumulativeStakes <= t && t < cumulativeStakes + c.stake){
     winner(i) = c
    }
  }
  cumulativeStakes += c.stake
}
val proposer = winner(0)
val validator1 = winner(1)
...

In the above steps, a single candidate may assume multiple roles. If you want to exclude such a case, you can remove the winning candidate from the candidates. In this case, the thresholds must be recalculated because the total S of the population changes.

Computational Complexity

  1. Typical sort algorithm: O(N log N), where N is the number of candidates.
  2. Generating random numbers: O(V + 1).
  3. Winner extraction: O(N × (V+1)) in the worst case; in many cases the loops can be terminated early.

The computational complexity is mainly affected by the number of candidates N. There is room for improvement by caching the candidate list already sorted by stake.
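Since Ostracon itself is written in Go, the same categorical/multinomial sampling might look roughly like the following sketch (it mirrors the Scala pseudocode above under the assumption of a deterministic PRNG seeded from vrf_hash; it is not the actual implementation):

package main

import (
	"encoding/binary"
	"fmt"
	"math/rand"
)

type candidate struct {
	publicKey string
	stake     uint64
}

// pickWinners selects count winners from candidates (sorted by descending stake),
// using thresholds drawn from a PRNG seeded with the VRF hash.
func pickWinners(vrfHash []byte, candidates []candidate, count int) []candidate {
	var total uint64
	for _, c := range candidates {
		total += c.stake
	}
	seed := int64(binary.LittleEndian.Uint64(vrfHash[:8]))
	rng := rand.New(rand.NewSource(seed))

	thresholds := make([]uint64, count)
	for i := range thresholds {
		thresholds[i] = uint64(rng.Float64() * float64(total))
	}

	winners := make([]candidate, count)
	var cumulative uint64
	for _, c := range candidates {
		for i, t := range thresholds {
			if cumulative <= t && t < cumulative+c.stake {
				winners[i] = c
			}
		}
		cumulative += c.stake
	}
	return winners
}

func main() {
	cands := []candidate{{"A", 500}, {"B", 300}, {"C", 200}} // sorted by stake, descending
	vrfHash := make([]byte, 32)
	fmt.Println(pickWinners(vrfHash, cands, 3)) // winners[0] = proposer, the rest = validators
}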

Make it possible to call a VRF API implemented in Algorand's libsodium

Use libsodium (used by Algorand) for the VRF. It's an extension of the original security library jedisct1/libsodium that adds VRF functionality.

libsodium is implemented in C and should be called via cgo from Tendermint.

The header file sodium.h is required to use libsodium's constants and functions from Go. Therefore, I'll place github.com/algorand/libsodium in crypto/vrf/internal/vrf/libsodium
(as a git submodule), following other packages such as crypto/secp256k1, which is part of Tendermint.

Additionally, we must also compile and install the library before we use it.
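For illustration, a minimal cgo wrapper might look like the sketch below. It assumes libsodium built from Algorand's fork is installed where cgo can find it (upstream libsodium does not ship the crypto_vrf_* functions), and that the Go constants mirror the C definitions.

package vrf

/*
#cgo LDFLAGS: -lsodium
#include <sodium.h>
*/
import "C"

import (
	"errors"
	"unsafe"
)

const (
	SECRETKEYBYTES = 64 // assumed to equal crypto_vrf_SECRETKEYBYTES
	PROOFBYTES     = 80 // assumed to equal crypto_vrf_PROOFBYTES
)

// Prove generates a VRF proof for message using the 64-byte secret key.
func Prove(secretKey *[SECRETKEYBYTES]byte, message []byte) (*[PROOFBYTES]byte, error) {
	proof := new([PROOFBYTES]byte)
	var messagePtr *C.uchar
	if len(message) > 0 {
		// pass the address of the underlying array, not the slice header
		messagePtr = (*C.uchar)(unsafe.Pointer(&message[0]))
	}
	if C.crypto_vrf_prove(
		(*C.uchar)(unsafe.Pointer(&proof[0])),
		(*C.uchar)(unsafe.Pointer(&secretKey[0])),
		messagePtr,
		C.ulonglong(len(message)),
	) != 0 {
		return nil, errors.New("crypto_vrf_prove failed")
	}
	return proof, nil
}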

Add code coverage to CI

I commented out the code coverage test in CircleCI in the last commit (ref #10).
But we need to know which code is covered by tests and which is not.

So we need to add the code coverage test back.

Regulation on successive winnings

Purpose

The overarching aim is to regulate the successive winnings of nodes with high Voting Power.

Retain the characteristic of electing the Proposer with a frequency proportional (possibly logarithmically) to the amount of stake in the long run, but make successive wins and short-term election predictions more difficult.

Problem Definition

We introduced randomness into Tendermint's Proposer selection via VRF. However, in the absence of stake variation, nodes with high stakes are more likely to be elected Proposer consecutively, and it's still possible to predict with some certainty, albeit probabilistically, which nodes will be Proposer in future rounds.

Possible solutions would be, for example, to exclude nodes recently selected as Proposer from several future rounds, or to reduce their winning rate. However, such heuristic methods probably lack logical robustness and fairness because they introduce human subjectivity.

Proposal

In this proposal, I propose to use a deterministic weight, such as a priority queue or the Weighted Round-Robin (WRR) algorithm, as the winning rate. In particular, the value called ProposerPriority, which is currently used in Tendermint's WRR PoS and progresses with each round, may be usable as a value proportional to the winning rate for the round.

With the ProposerPriority of Tendermint's WRR, the probability that a node that is Proposer in a given round R is selected as Proposer again at R+1 is expected to be quite low. In addition, combined with random Proposer selection by VRF, it becomes very difficult to predict Proposer selection at, say, round R+5.

We should actually apply this method and check whether the election frequency is linearly or log-proportionally related to Voting Power (the amount of stake) over time.
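For reference, a simplified Go sketch of how Tendermint's weighted round-robin ProposerPriority progresses each round (centering and scaling of priorities are omitted; this is not the full Tendermint implementation):

package main

import "fmt"

type validator struct {
	name     string
	power    int64
	priority int64
}

// nextProposer advances every validator's priority by its voting power, picks the
// highest-priority validator as proposer, and penalizes it by the total power.
func nextProposer(vals []*validator) *validator {
	var total int64
	var proposer *validator
	for _, v := range vals {
		v.priority += v.power
		total += v.power
		if proposer == nil || v.priority > proposer.priority {
			proposer = v
		}
	}
	proposer.priority -= total
	return proposer
}

func main() {
	vals := []*validator{{"a", 1, 0}, {"b", 2, 0}, {"c", 3, 0}}
	for round := 0; round < 6; round++ {
		fmt.Println("round", round, "proposer:", nextProposer(vals).name)
	}
}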



Modify License document

I have consulted with the patent team, and here is a summary and the details.

Summary

  • Original file - keep the original copyright notice
    • but lots of files don't have a copyright header.
  • Modified file - add modification info
    • If there's no copyright header when you modify it, please add it.
Modified work Copyright {year} LINE Corp. 
Original work Copyright 2016 All in Bits, Inc
        
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
        
    http://www.apache.org/licenses/LICENSE-2.0
        
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

(I'm still not sure about the copyright year (Copyright 2016 All in Bits, Inc). I'll get back to you after more investigation.)

  • New file - add LINE's copyright header
Copyright {year} LINE Corp. 
    
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
    
    http://www.apache.org/licenses/LICENSE-2.0
    
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Details

Licensing of line/tendermint

Details of Apache license v2.0

From the perspective of the original tendermint (because we should comply with what the original author licensed):

  • Based on section 1. Definition,
    • the original tendermint is a Work.
    • the new tendermint is a Derivative Works.
  • Based on the section 4. Redistribution,
    • open-sourcing the new tendermint is a redistribution since LINE will distribute copies of the Work and the Derivative Work.
      • We must do the following when we publish this repo.
      • include a copy of the License; and
      • state that we changed the files;
        • the thing is, there's no prescribed way to write the notice.
        • Although writing original copyright and modified copyright together is not a common way in the intellectual property world, it's commonly used in the open source world.
        • retain the same rights that we've got from the original work. (such as copyright, patent, ...)
    • It says "You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions .... of Your modifications".
      • Therefore we'll make the new tendermint Apache-licensed by LINE Corp.

Please feel free to let me know if you have any other ideas.

Originally posted by @syleeeee in #14 (comment)

Add functions to generate VRF Proof to PrivValidator-derived classes

To generate a VRF Proof, we need to implement an additional function in each subclass derived from the PrivValidator interface, which holds a private key used to generate signatures.

The target subclasses derived from the PrivValidator are:

  1. FilePV has a key pair mapped to a local file and uses it to generate signatures.
  2. MockPV generates and keeps a key pair on the memory and uses it to generate signatures for tests.
  3. SignerRemote provides RPC functions to delegate signature to PrivValidator of the remote process.
  4. SignerValidatorEndpoint acquires a mutex around each SignerRemote method call to synchronize access.

This issue is to add the GenerateVRFProof() function to these classes.
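A minimal sketch of the addition; only the method name GenerateVRFProof comes from this issue, and the exact signature shown here is an assumption:

package types

// VRFProver is the capability each PrivValidator implementation would gain:
// producing a VRF proof over a message with the validator's private key.
// The raw []byte proof type is used purely for illustration.
type VRFProver interface {
	GenerateVRFProof(message []byte) (proof []byte, err error)
}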

Make VRF Interface and introduce additional library

Background

Currently, we use Algorand's libsodium as the VRF library of LINE Blockchain. This choice is based on the following perspectives:

  1. Support for ed25519 (ECVRF-ED25519-*).
  2. Has a clear OSS license, free of seriously "infectious" (copyleft) terms.
  3. Is efficient (though a PoC shows that a Go implementation performs well enough).
  4. Has been used in Algorand's production.

However, libsodium is implemented in C and requires special build procedures. It often causes build errors when setting up new local or CI environments, and solving these problems consumes much more time than we expected.

As the Tendermint repository already contains C libraries other than libsodium, this is a problem that can eventually be solved. Given the current phase and resources, however, we shouldn't spend too much time analyzing and resolving the cause of this problem. For this reason, we'll make an alternative VRF implementation available that doesn't cause build errors and can be used as a substitute for libsodium during current development.

Note that this is a temporary workaround and assumes that we will eventually use libsodium or similar suitable libraries.

Goal

The goal of this work is to isolate the VRF implementation behind an abstracted interface and to eliminate build errors in CI environments, while providing the VRF capabilities necessary for general election behavior. The requirements for the alternative implementation are as follows:

  1. Support for ed25519 (ECVRF-ED25519-*).
  2. Has a clear OSS license, free of seriously "infectious" (copyleft) terms.
  3. Has the minimum functions needed for use as a VRF (security and completeness can be disregarded for now).
  4. Successful build on local or CI environments.

Ideally it would be available via go get as an external library, but it can also be placed directly in our repository. In that case, following the directory structure below (modeled on crypto/secp256k1), the new resources may be placed in the crypto/vrf/internal/vrf/[something] directory.

[Screenshot: proposed directory layout under crypto/vrf/internal/vrf]

There may also be work to build a CI for this additional library.
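For example, the abstraction might look like the following sketch (the method names and shapes are assumptions; the point is that the libsodium-backed implementation and a pure-Go fallback can be swapped behind one interface, e.g. via build tags):

package vrf

// vrfImpl abstracts a concrete VRF implementation so that the libsodium-backed
// version and a pure-Go fallback can be selected at build time.
type vrfImpl interface {
	// Prove returns a proof for message under the 64-byte private key.
	Prove(privateKey []byte, message []byte) ([]byte, error)
	// ProofToHash converts a proof into the deterministic VRF output.
	ProofToHash(proof []byte) ([]byte, error)
	// Verify checks proof against publicKey and message and, if valid,
	// returns the corresponding output.
	Verify(publicKey []byte, proof []byte, message []byte) (bool, []byte, error)
}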

License Declaration Comment

What license declaration should be added to the Go source files we created and added to Tendermint? For reference, the existing Tendermint sources currently have no license comment.

A suggestion in Issue #12.

Investigate Ed25519 Key Format Compatibility between Tendermint and libsodium

Tendermint and libsodium generate different keys from the same fixed all-zero 32-byte seed.

tendermint: private key: 66687aadf862bd776c8fc18b8e9f8e20089714856ee233b3902a591d0d5f2925b1c4df1c17cce90a03cd4c057fc74d4e2ee24ddfe2a8c9c5fd8d0a45a1f082f3 (64 bytes)
libsodium : private key: 00000000000000000000000000000000000000000000000000000000000000003b6a27bcceb6a42d62a3a8d02a6f0d73653215771de243a63ac048a18b59da29 (64 bytes)
tendermint: public key: b1c4df1c17cce90a03cd4c057fc74d4e2ee24ddfe2a8c9c5fd8d0a45a1f082f3 (32 bytes)
libsodium : public key: 3b6a27bcceb6a42d62a3a8d02a6f0d73653215771de243a63ac048a18b59da29 (32 bytes)

Ed25519 is specified in RFC 8032, and its parameters are considered to be compatible. We need a conversion if we want to use Tendermint's Ed25519 key pair; if that's not possible, another key must be used for the leader election.
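For reference, a small Go check of the RFC 8032 seed-based derivation, which Go's standard crypto/ed25519 implements; whether libsodium's dump above matches this derivation is exactly what this issue is investigating:

package main

import (
	"crypto/ed25519"
	"fmt"
)

func main() {
	seed := make([]byte, ed25519.SeedSize) // 32 bytes of zeros
	priv := ed25519.NewKeyFromSeed(seed)   // 64 bytes: seed || public key
	pub := priv.Public().(ed25519.PublicKey)
	fmt.Printf("private key: %x (%d bytes)\n", priv, len(priv))
	fmt.Printf("public key : %x (%d bytes)\n", pub, len(pub))
}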

Proposal: Fair election and reward for validators

Replaces #24 and #25

Introduction

A review of #24 raised a question about the random election method: a validator can be elected for more than its staking amount. For example, we cannot prevent a situation in which a single validator wins a majority of the votes in a round. However, I found that simply invalidating elected votes that exceed the staking amount would distort the statistics.

This proposal presents a way to solve all the issues in #24 and #25 while still using random election.

The following are the problems we need to solve.

  • If the network has more validators than the maximum number it can handle, select a group of validators in a reasonable manner
  • The probability of winning is proportional to the amount of staking
  • An elected validator must have voting power proportional to its amount of staking
  • No validator shall have more voting power than its staking in any round

Definition

  • max_active_validators: Maximum number of validators the network can handle. I set it to 100 in the experiment.
  • elected votes: the number of votes a candidate has won in one round, from a minimum of 0 with no upper limit. It is proportional to the amount of staking.

Fair election by carrying a validator's excess votes over to the next round

To describe the algorithm, define the following data structures and function prototypes:

var TotalStaking uint64 // sum of the Staking of all Nodes

type Node struct {
	Id uint32
	Staking uint64
}

type Nodes []*Node

type Validator struct {
	Node        *Node
	Elected     float64 // it's not the number of elected votes but the percentage of votes in the whole
	Carried     float64
}

type Validators []*Validator

func elect(nodes Nodes, dot uint64) *Node { // random election that elects one validator
	begin := uint64(0)
	for _, node := range nodes {
		if dot >= begin && dot < begin + node.Staking {
			return node
		}
		begin += node.Staking
	}
	panic("not possible")
	return nil
}

func electValidator(nodes Nodes, validatorCount int, previous Validators) Validators {
	var totalElection int // incremented by 1 for each of the ValidatorCount selections
	electedCountMap := map[*Node]int{} // elected count of a node
	prevRate := sum_of_carried(previous)
...
}

electValidator() selects validators from the given candidates (nodes).

We should call v := electValidator(nodes, n, nil) in the first round. From the next round on, we call v = electValidator(nodes, n, v).

Validator.Elected is the elected vote ratio, and 0 <= Validator.Elected <= Validator.Node.Staking / TotalStaking.

Adding up the Elected of all Validators in a round gives a value of 1.

The key point is Validator.Carried. If, through elect(), a candidate's Elected exceeds its staking ratio (Validator.Node.Staking/TotalStaking), the excess is cut from Elected and carried over to Carried:

v.Elected = v.Carried + (float64(electedCountMap[v.Node])/float64(totalElection))*(1.0-prevRate)
max := float64(v.Node.Staking) / float64(totalStaking)
if v.Elected > max {
	v.Carried = v.Elected - max
	v.Elected = max
} else {
	v.Carried = 0
}

Statistics result

electing 10 validators among 100 nodes

id=25, staking=95942, elected=0.009675, carried=0.071559
id=10, staking=100304, elected=0.010115, carried=0.070680
id=30, staking=100476, elected=0.010132, carried=0.070645
id=13, staking=97614, elected=0.009844, carried=0.071222
id=47, staking=47821, elected=0.004822, carried=0.081264
id=46, staking=47536, elected=0.004794, carried=0.081322
id=66, staking=53969, elected=0.005442, carried=0.080024
id=20, staking=94064, elected=0.009486, carried=0.071938
id=32, staking=93167, elected=0.009395, carried=0.163028
id=0, staking=408372, elected=0.041181, carried=0.008548
total staking=9916578, elected staking=1139265, totalElected=0.114885, totalCarried=0.770230

electing 10 validators among 11 nodes

id=4, staking=422112, elected=0.042670, carried=0.081618
id=10, staking=429874, elected=0.041429, carried=0.000000
id=9, staking=411507, elected=0.041598, carried=0.082690
id=5, staking=444584, elected=0.041429, carried=0.000000
id=0, staking=4000000, elected=0.259944, carried=0.000000
id=2, staking=1032912, elected=0.104413, carried=0.014033
id=8, staking=394026, elected=0.039830, carried=0.001768
id=6, staking=391340, elected=0.039559, carried=0.001870
id=1, staking=1088101, elected=0.082859, carried=0.000000
id=3, staking=878987, elected=0.088853, carried=0.035435
total staking=9892573, elected staking=9493443, totalElected=0.782584, totalCarried=0.217416

accumulated voting power of all nodes for 1,000,000 rounds when electing 30 validators among 100 nodes

node[0] staking = 408372, votingPower = 111123554156.101898(2.7211)
node[1] staking = 435240, votingPower = 118234588705.900314(2.7165)
node[2] staking = 413164, votingPower = 112547081783.588745(2.7240)
node[3] staking = 395018, votingPower = 107308301730.356918(2.7165)
node[4] staking = 393971, votingPower = 107094846362.584427(2.7183)
node[5] staking = 414945, votingPower = 113149353319.100266(2.7269)
node[6] staking = 365251, votingPower = 99282478857.849655(2.7182)
node[7] staking = 372522, votingPower = 101326818927.546143(2.7200)
node[8] staking = 367758, votingPower = 100252350995.846466(2.7260)
node[9] staking = 384073, votingPower = 104119087313.889755(2.7109)
node[10] staking = 100304, votingPower = 27512408940.416477(2.7429)
node[11] staking = 106272, votingPower = 29059940225.238789(2.7345)
node[12] staking = 94286, votingPower = 25871487525.402443(2.7439)
node[13] staking = 97614, votingPower = 26741361343.442368(2.7395)
node[14] staking = 96362, votingPower = 26210969321.485191(2.7201)
node[15] staking = 99378, votingPower = 27184398823.967289(2.7355)
node[16] staking = 95661, votingPower = 26188094811.048931(2.7376)
node[17] staking = 95863, votingPower = 26270755751.960640(2.7404)
node[18] staking = 103581, votingPower = 28251721286.542542(2.7275)
node[19] staking = 94372, votingPower = 25769909245.406086(2.7307)
node[20] staking = 94064, votingPower = 25548739778.887306(2.7161)
node[21] staking = 97218, votingPower = 26499066066.399834(2.7257)
node[22] staking = 101413, votingPower = 27642677319.147469(2.7258)
node[23] staking = 107249, votingPower = 29257543940.726444(2.7280)
node[24] staking = 95863, votingPower = 26311899166.714108(2.7447)
node[25] staking = 95942, votingPower = 26108980458.034668(2.7213)
node[26] staking = 105051, votingPower = 28562077666.086296(2.7189)
node[27] staking = 94132, votingPower = 25711807530.496296(2.7315)
node[28] staking = 107306, votingPower = 29303814487.092369(2.7309)
node[29] staking = 103934, votingPower = 28290122569.033993(2.7219)
node[30] staking = 100476, votingPower = 27673761486.765274(2.7543)
node[31] staking = 90567, votingPower = 24862633822.944218(2.7452)
node[32] staking = 93167, votingPower = 25622110387.559948(2.7501)
node[33] staking = 102145, votingPower = 28079519408.795990(2.7490)
node[34] staking = 109504, votingPower = 29942135959.942913(2.7343)
node[35] staking = 91590, votingPower = 25088807736.305534(2.7393)
node[36] staking = 101896, votingPower = 27937605375.382408(2.7418)
node[37] staking = 91183, votingPower = 24760313586.877224(2.7155)
node[38] staking = 103840, votingPower = 28511400082.061153(2.7457)
node[39] staking = 96031, votingPower = 26284226577.425892(2.7371)
node[40] staking = 46733, votingPower = 12806379836.199991(2.7403)
node[41] staking = 50410, votingPower = 13798515690.462137(2.7373)
node[42] staking = 50441, votingPower = 13796800932.148359(2.7352)
node[43] staking = 47786, votingPower = 13173240101.740839(2.7567)
node[44] staking = 49232, votingPower = 13377496447.539177(2.7172)
node[45] staking = 50305, votingPower = 13608706466.942684(2.7052)
node[46] staking = 47536, votingPower = 12949050426.846352(2.7241)
node[47] staking = 47821, votingPower = 13188533576.071650(2.7579)
node[48] staking = 52886, votingPower = 14437386577.879257(2.7299)
node[49] staking = 48619, votingPower = 13437311624.382286(2.7638)
node[50] staking = 53805, votingPower = 14805697227.890570(2.7517)
node[51] staking = 47972, votingPower = 13089744268.731922(2.7286)
node[52] staking = 53943, votingPower = 14615731750.071083(2.7095)
node[53] staking = 45975, votingPower = 12668678773.682785(2.7556)
node[54] staking = 54769, votingPower = 15016213669.629269(2.7417)
node[55] staking = 45743, votingPower = 12521327839.796650(2.7373)
node[56] staking = 47223, votingPower = 12895991841.298204(2.7309)
node[57] staking = 51810, votingPower = 14174231700.029474(2.7358)
node[58] staking = 47416, votingPower = 13061488970.907366(2.7547)
node[59] staking = 48116, votingPower = 13088134615.763790(2.7201)
node[60] staking = 54328, votingPower = 14898308276.512819(2.7423)
node[61] staking = 52418, votingPower = 14364708223.388477(2.7404)
node[62] staking = 53010, votingPower = 14612141371.894453(2.7565)
node[63] staking = 52302, votingPower = 14426239972.249813(2.7583)
node[64] staking = 46830, votingPower = 12781778163.594860(2.7294)
node[65] staking = 49284, votingPower = 13493448233.022907(2.7379)
node[66] staking = 53969, votingPower = 14850532396.341614(2.7517)
node[67] staking = 51826, votingPower = 14303135004.404293(2.7598)
node[68] staking = 54789, votingPower = 14979885931.599506(2.7341)
node[69] staking = 54222, votingPower = 14914396288.424328(2.7506)
node[70] staking = 45909, votingPower = 12501315355.907877(2.7231)
node[71] staking = 49932, votingPower = 13695110831.345026(2.7428)
node[72] staking = 54269, votingPower = 14870572932.409517(2.7402)
node[73] staking = 54549, votingPower = 14990569204.222548(2.7481)
node[74] staking = 48480, votingPower = 13201018407.023676(2.7230)
node[75] staking = 51908, votingPower = 14274401189.762775(2.7499)
node[76] staking = 52109, votingPower = 14469194836.308517(2.7767)
node[77] staking = 50637, votingPower = 13872440478.340454(2.7396)
node[78] staking = 51494, votingPower = 14165622589.564526(2.7509)
node[79] staking = 50517, votingPower = 13759757804.583889(2.7238)
node[80] staking = 52558, votingPower = 14380310615.949842(2.7361)
node[81] staking = 49039, votingPower = 13430194912.289827(2.7387)
node[82] staking = 46307, votingPower = 12704232707.909594(2.7435)
node[83] staking = 54859, votingPower = 15058928078.915295(2.7450)
node[84] staking = 53963, votingPower = 14632314335.814215(2.7115)
node[85] staking = 48221, votingPower = 13273131376.963167(2.7526)
node[86] staking = 52211, votingPower = 14259812148.161016(2.7312)
node[87] staking = 51445, votingPower = 14117585386.827826(2.7442)
node[88] staking = 45856, votingPower = 12551368326.759348(2.7371)
node[89] staking = 51695, votingPower = 14054928923.276094(2.7188)
node[90] staking = 51227, votingPower = 14028647489.898046(2.7385)
node[91] staking = 48697, votingPower = 13182353312.170832(2.7070)
node[92] staking = 47369, votingPower = 12947299549.342567(2.7333)
node[93] staking = 50352, votingPower = 13922859447.717659(2.7651)
node[94] staking = 46873, votingPower = 12902507433.241976(2.7527)
node[95] staking = 47389, votingPower = 13102163617.910748(2.7648)
node[96] staking = 51280, votingPower = 14016123781.801979(2.7333)
node[97] staking = 46268, votingPower = 12663658375.393581(2.7370)
node[98] staking = 47814, votingPower = 12970151286.495846(2.7126)
node[99] staking = 35254, votingPower = 9702744736.722759(2.7522)

The number in parentheses is votingPowerOfNode/100000.0/Node.Staking. We can see that this value converges roughly to a single value.

Implement VRF+BFT Consensus Algorithm

Description

Replace the Tendermint BFT election with the VRF-based BFT algorithm. Tendermint's PoS implementation is used only for the election probabilities of the Proposer and Validators.

Stories

The details of each item should be written in their respective story or task tickets; this ticket is just a summary and a pointer to them. Please add stories and tasks as needed.

Design Policy

The proposals outlined below are specific to LINK Network v2, and ultimately any differences from the original Tendermint should be compiled into a public Wiki page.

  • Election:
    • #22 Categorical distribution and #23 Binomial distribution have been proposed for random selection using VRF.
    • #30: For performance, stake conformance, and Byzantine tolerance, we decided to adopt the former.
  • Reward: #28, #35
    • It's better to take the lead on the Korean side as it's a decision related to expanding the scope of usage of our blockchain.
  • Violation of Rules: #39
    • The list of cases, and their penalties.
  • Recovery:
    • #31: BFT-assumption violation is a common problem of PoS nature. We accept the Byzantine agreement and recover by VRF.
  • Conformity:
    • Benchmark (processing time, memory usage) #30
    • Legitimacy of reward (t-test to verify the voting power whether it's proportional to the amount of stake) #30
    • BFT-assumption violation simulation (liveness and soundness of consensus) #30
  • Genesis Round:
    • How to determine the first Proposer and Validators.
  • Codename:
    • A nickname that is easy to say and memorable (hopefully reflecting the key algorithm, like "Raft").

Implementation

The VRF election work may be divided into the following parts.

  • Basic functionality: in crypto/vrf
    • #7: Introduce VRF capability into Tendermint. done #1 #2.
    • #32: Prepare election (random sampling) function.
    • #40: Make VRF interface.
  • Proposer Election: in PrivValidators, #3
    • #7: generate the next VRF hash and proof and put on the block.
    • #6: replace the current round-robin election with one based on VRF.
  • ValidatorSet Election: #33
    • Decide whether a ValidatorSet should also be elected by VRF in addition to the Proposer in the initial release; review the requirements.
    • If we do this, investigate whether it's possible to modify the ValidatorSet selection on the Cosmos side.
  • Proposer/Validators Verification: #34
    • Replace block verification with the VRF version.
  • Rewarding:
    • It may require work in the Cosmos SDK, which deals with stake.

Build CI environments

Test

  • Functional test
    • the n-th Proposer selects the expected (n+1)-th Proposer and Validators.
    • the selected Proposer and Validators play their respective roles.
    • block generation continues without stalling.
  • Crash fault and recovery test
    • the system recovers normally when the selected Proposer doesn't respond.
    • the system works normally when the number of selected non-faulty Validators equals the quorum.
    • block generation for that round fails and the system recovers normally when the number of selected non-faulty Validators is less than the quorum.
  • Byzantine fault and recovery test
    • the Validators or the P2P network detect and recover correctly when an unexpected Proposer has acted.
    • the P2P network detects and recovers correctly when an unexpected Validator has acted.
  • Performance test (practical evaluation)
    • compare Tendermint's PoS with the VRF consensus (the latter will probably be slower due to increased computational complexity).
    • measure the performance against the number of Candidates (Nodes) and Validators.

Public Document

  • New block format
  • Concrete sequence chart or flow chart
  • Fault recovery procedures

Electing one producer and validator to use a binomial distribution

I propose a method of selecting producers and validators using a binomial distribution.
Each user has voting power equal to its staking value, is elected as producer and validator according to that voting power, and is rewarded. In addition, producers who seek out and prove that users have cheated will receive an additional reward, and users who are found to have cheated will be punished.

Election logic

[Figure: binomial distribution election formula]
The formula above calculates, from the user_hash (a hash of the seed and the user's public address), how many times a user with voting power k out of the total power w is elected under the binomial distribution.
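The original formula image is not preserved here. As an assumption, a plausible form of this rule, following Algorand-style sortition with voting power k, total power w, and expected committee size expectedSize (so p = expectedSize / w), is:

\text{choose } j \text{ such that } \sum_{i=0}^{j-1} \binom{k}{i} p^{i} (1-p)^{k-i} \;\le\; \frac{\mathrm{user\_hash}}{2^{\mathrm{hashlen}}} \;<\; \sum_{i=0}^{j} \binom{k}{i} p^{i} (1-p)^{k-i}, \qquad p = \frac{\mathrm{expectedSize}}{w}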

Seed

Each block stores a VRF seed and proof value. With this seed, we select the producer and validator group that creates the next block.
The seed of the next block is generated as a VRF hash from the seed of the previous block and the private key of the producer.

seed_n+1 = VRFpk(seed_n)

Elected Producer

How to elect Producer

The producer election hashes the seed of the previous block together with the public address of each user.
The hash function should make each value effectively random (e.g., sha512_256 or xorshift).

  • The hash of the previous block's seed concatenated with each user's public address is used as the user's hash.
  • Using each user's hash, calculate the election count with the binomial-distribution election logic. (It returns the number of times elected: 0, 1, 2, ...)
  • If the election count n is 1 or more, hash the user's hash together with each index (0…n) again:
    • for (i=0; i<n; i++) { user_hash_i = hashing(user_hash || i) }
  • Select the producer with the largest hash among all hashes obtained from all users.
highestHash = 0
producer = nil
for (user <- all validators) {
	hash = hashFunc(seed + publicAddress)
	k = electFunc(votingPower, totalPower, expectedSize, hash)
	for (idx <- k) {
		subHash = hashFunc(hash + idx)
		if highestHash < subHash {
			highestHash = subHash
			producer = user
		}
	}
}

In this case, the producer election must use expectedSize so that more than one candidate can be elected.

For example, it is as if each validator tosses every coin it owns, and the producer is the owner of the heads-up coin with the highest issue year (or issue number).

Elected validators

How to elect validator set

Validators are also elected in the same way as the producer.

The difference is that only one user is chosen as the producer, whereas every user elected by the election logic becomes a validator.

  • The hash of the previous block's seed and each user's public address is used as the user's hash. (Use seed + “validator” + public_address to distinguish it from the producer election.)
  • Calculate each user's election count with the binomial election logic.
  • If the election count is 1 or more, select the user as a validator.
var validatorSet
for (user <- all validators) {
	hash = hashFunc(seed + “validator” + publicAddress)
	k = electFunc(votingPower, totalPower, expectedSize, hash)
	if (k > 0) {
		validatorSet.add(user)
	}
}

What is the appropriate number of validators?

When the set of users selected as validators is called the validator group, an appropriate number of validators should be selected on the assumption that Byzantine users exist.

If the fraction of honest users is g (good) and the fraction of malicious users is b (bad), then

g + b = 1

The malicious fraction must be less than 1/3 so that blocks can be created with more than 2/3 agreement:

g > 2b
b < g/2

Then, if t is the fraction of voting power selected into the validator group, t should be large enough that b cannot make up two-thirds of it, so that a malicious group cannot create a block by itself:

t >= b + g/4

(Since b < g/2, we have g/4 > b/2, so t >= b + g/4 implies t > 3b/2, i.e., b is less than 2/3 of t.)

If all of the selected group are honest users, the minimum size needed for consensus can be seen:

t < g

Therefore, block creation can proceed without difficulty if the validator group holds at least 67% of the total voting power, and even if there are many malicious users, holding at least 50% prevents a bad block from being generated.
That is, the validator group needs to secure voting power of 50% or more of the total voting power.

  • Minimum voting power to generate blocks without problems: 67% or more
  • Minimum voting power to prevent malicious block generation: 50% or more

(The expected size is to be obtained by simulation.)

Relationship between producer and validator

Producers and validators are different users because they are selected in different ways. However, a producer can also be elected as a validator by the validator election formula.
If a producer is also elected as a validator, it receives both the producer reward and the validator reward.

Prepare a multinode test environment locally

Summary

The original Tendermint has a local multi-node test environment, make localnet-start. But this doesn't suit us as-is, so I'll make some modifications to the original method.

Problem Definition

What more is needed:

  • Add debug loggers for consensus and other parts under development; they should be easy to modify.
  • The number of nodes should be changeable.

What to use as a seed for VRF random number generation?

We assumed the VRF seed would be a volatile value such as a block hash, but looking in more detail, there are several options:

  • Height, Time, TotalTxs
  • LastCommitHash // commit from validators from the last block
  • DataHash // transactions hashes from the app output from the prev block
  • ConsensusHash // consensus params for current block
  • AppHash // state after txs from the previous block
  • LastResultsHash // root hash of all results from the txs from the previous block

It also seems possible to use fields from one or more previous blocks instead of the current one (I don't know exactly what all the fields mean). Or we may use a combination of them, such as sha256(LastResultsHash || Height). What is the best choice for the VRF seed?

  • It should have a security guarantee in the cryptographic context.
  • It should be a value that cannot be intentionally manipulated by an adversary.
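As a small sketch of the combination idea mentioned above (sha256(LastResultsHash || Height) is just one of the candidates under discussion, not a decision):

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// vrfSeed combines a previous-block field with the height into a single seed.
func vrfSeed(lastResultsHash []byte, height int64) [32]byte {
	hbuf := make([]byte, 8)
	binary.BigEndian.PutUint64(hbuf, uint64(height))
	return sha256.Sum256(append(append([]byte{}, lastResultsHash...), hbuf...))
}

func main() {
	seed := vrfSeed([]byte("example-last-results-hash"), 100)
	fmt.Printf("%x\n", seed)
}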

Fix the skipped unittests after adding Random Election

Fix the following unit tests, which were skipped after adding the Random Election:

  • consensus/replay_test.go
    • TestWALCrash
    • TestSimulateValidatorsChange
    • TestHandshakeReplayAll
    • TestHandshakeReplaySome
    • TestHandshakeReplayOne
    • TestHandshakeReplayNone
    • TestMockProxyApp
  • consensus/state_test.go
    • TestStateLockPOLRelock
    • TestStateLockPOLUnlock
    • TestStateLockPOLSafety1
    • TestStateLockPOLSafety2
    • TestProposeValidBlock
    • TestSetValidBlockOnDelayedPrevote
    • TestSetValidBlockOnDelayedProposal
    • TestStartNextHeightCorrectly
    • TestResetTimeoutPrecommitUponNewHeight
    • TestStateHalt1
  • types/validator_set_test.go
    • TestAveragingInIncrementProposerPriority
    • TestValSetUpdatesOrderIndependenceTestsExecute

Proposal: Deciding max active validators

Max active validators

We are currently agonizing over the algorithm for selecting a fixed number of validators from among the candidates. (#22, #23)
Restricting the maximum number of validators keeps the time needed for network consensus bounded.
However, if there are not many participants in the early days of the network, it is better to have all staking nodes participate as validators without electing validators from the candidates.
So we need a property that acts as a threshold for activating the election of validators among the candidates:

  • num of validators < threshold: all validators are active in every round
  • threshold <= num of validators: elect max_active_validators among the candidates

We need to be able to switch between these two policies on-chain, so the system needs a property that lets us dynamically switch policies according to the number of validation participants.

I call this property max_active_validators.
(It is different from max_validators of Cosmos or Tendermint)

Minimum staking to be a validator

If the minimum staking is too low, too many validator nodes can be active in the network.
Under those circumstances, if the number of validators is limited, the total voting power of the validators active in a round may be too low.
For example, if a minimum of 100 links is specified, 1000 nodes can participate as validators; if the active validators are limited to a maximum of 100, the active total voting power could be only 10,000 links.
So, what should the minimum staking be?

We need some assumption to set this value: at least a certain share of the total validator voting power should be used in each round of validation.
According to Hong-seop's opinion, one half of the total voting power must participate in the actual validation process to prevent malicious blocks from being created.

From this assumption, the following formula is obtained.

  • minimum staking to be a validator = total staking / (2*max_active_validators)

For dynamic minimum staking

Total staking can vary at every height, and operating validator nodes becomes tricky if the minimum staking changes with every block. The minimum staking should therefore be fixed for some period, which means we have no choice but to make an assumption about the total staking.

But what happens when the number of participants keeps growing far beyond the total voting power we expected? The maximum-active-validator limit may then leave less than half of the voting power participating in validation.

  • expected total staking: 1,000,000
  • minimum staking: 50,000
  • max active validators: 100
  • if the current total staking is under 1,000,000: we can elect all participants as validators. In this case, the active voting power is over 50,000 * 100
  • if the current total staking is between 1,000,000 and 2,000,000: the number of validator nodes can exceed 100, so we elect only 100 validators because of max_active_validators
  • if the current total staking is over 2,000,000: the active voting power falls to less than one half of the total => we need to adjust some property.

So I propose the following algorithm for adjusting the property:

var (
	max_active_validators = fromProperty(...) // ex) 100
	minimum_total_staking = fromProperty(...) // ex) 1000000
	expected_total_staking = minimum_total_staking
)

actual_total_staking := sum(...)
for actual_total_staking > 2*expected_total_staking {
	expected_total_staking *= 2
}
for actual_total_staking < expected_total_staking/2 {
	if expected_total_staking == minimum_total_staking {
		break
	}
	expected_total_staking /= 2
}
minimum_staking := expected_total_staking / (2 * max_active_validators)

Issues to be solved for the introduction of BLS

Summary

One important thing is that the introduction of BLS signature aggregation results in a mix of ed25519 keys for the VRF and BLS keys for the signature on the system.

  • Generating, storing, and distributing the BLS key pair.
    • Can we generate a BLS key pair when creating an ed25519 key with the tendermint command?
    • Is there a more appropriate, standardized format such as PKCS#12?
    • Can the public key be distributed in the same way as in the current tendermint?
  • Using BLS private key for signature and aggregation.
    • When will we aggregate the BLS signatures?
      • Aggregation can be done when generating a Commit for a block.
      • Each CommitSig currently carries the signer's address and signature; it's probably possible to move the aggregated signature into Commit.
      • However, since CommitSig also seems to be used to represent a signature on a Vote, it may not be possible to simply remove the field from the class.


Lock contention for access to consensus state between rpc request and tx processing occurs

Summary

RPC requests take a read lock on the consensus state, and tx processing takes a write lock while it applies txs to the consensus state, so the two contend for the lock.

Problem Definition

For example, "localhost:26657/status" rpc require consensusState.mtx.RLock() while it calls GetLastHeight() at https://github.com/line/tendermint/blob/404d27a892a54b8d94732b0e143f0f69b4dd8382/consensus/state.go#L220

And consensus logic can require this lock in here(https://github.com/line/tendermint/blob/404d27a892a54b8d94732b0e143f0f69b4dd8382/consensus/state.go#L674) and handleMsg(mi msgInfo) may call enterCommit() which may take long time to process several thousands txs in a block.

This situation raises two problems:

  • an RPC call can hang while a long tx-processing step is in progress
  • tx processing can be delayed for a short time by RPC calls

Proposal

When RPC requests read the consensus state, verify whether the read can be done without a lock, and if so, modify the code to read without locking.
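One possible shape of such a change, sketched under assumptions (this is not the repository's actual fix): the consensus routine publishes an immutable snapshot of the few fields RPC needs, and the RPC handlers read that snapshot without taking the consensus mutex.

package main

import (
	"fmt"
	"sync/atomic"
)

// statusSnapshot holds the read-only fields an RPC status call needs.
type statusSnapshot struct {
	LastHeight int64
	Round      int32
}

type consensusState struct {
	snapshot atomic.Value // stores *statusSnapshot
}

// publish is called by the consensus goroutine (which holds the write lock)
// whenever the relevant fields change.
func (cs *consensusState) publish(height int64, round int32) {
	cs.snapshot.Store(&statusSnapshot{LastHeight: height, Round: round})
}

// GetLastHeight is lock-free: it only reads the latest published snapshot.
func (cs *consensusState) GetLastHeight() int64 {
	if s, ok := cs.snapshot.Load().(*statusSnapshot); ok {
		return s.LastHeight
	}
	return 0
}

func main() {
	cs := &consensusState{}
	cs.publish(42, 0)
	fmt.Println(cs.GetLastHeight())
}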



Setup custom CI

In order to set up our CI based on the default Tendermint CI, we will check the following:

  • Analyze the CircleCI configuration.
  • Modify the configuration to fit our setup.

Design for Various Rule Violation

Expected results of this design work:

  1. A list and classification of rule violations: what are the possible violations (put another way, the rules that must be followed) by Byzantine nodes, or by nodes that aren't malicious but fail to provide the required services?
  2. A policy for each violation: how to deal with rule violators (penalties are generally used). Problems that can be detected and recovered from autonomously can be incorporated into the consensus mechanism. In other cases, we should discuss and determine how to recover to a sound state.

Verification failed despite correct proof and message

Situation: when a proof is created for a message with the VRF prove() and then verify() is called with the same message, verification could fail.

Caused by: a bug in passing the message from Go to C as a byte array. Specifically, the address of the underlying array (a pointer) must be passed, but the address of the Go slice header was passed instead, as follows:

func Prove(privateKey *[SECRETKEYBYTES]byte, message []byte) (*[PROOFBYTES]byte, error) {
  ...
  messagePtr := (*C.uchar)(unsafe.Pointer(&message))
  if C.crypto_vrf_prove(proofPtr, privateKeyPtr, messagePtr, messageLen) != 0 {...}
}

The &message specifies not array address but slice address.

messagePtr := (*C.uchar)(unsafe.Pointer(&message[0]))
                         // or C.NULL if len(message) == 0

When prove() and verify() used exactly the same instance of the message []byte slice, verification seemed to succeed because the contents pointed to by both addresses were equal. However, when the slices had the same data but were different instances, it failed because the data pointed to by the slice headers' addresses was different. This was essentially a memory access violation and could have caused a SIGSEGV.

Fixed in PR #12. This issue has been created to share the information.

[Proposal] Reward allocation design according to Stake ratio

TL;DR

A reward allocation algorithm for Random Sampling without Replacement (non-duplicate elections), based on a model that compensates for opportunity loss.

  • When we elect n Voters from N Candidates without duplication, the problem is that a Voter selected in the k-th selection loses the opportunity to earn rewards in the remaining n−k selections.
  • Let α be the base reward for winning one selection, s[i] the amount of Candidate i's stake, S the total stake of the Candidate set, and σ[r] the total stake of the Candidates selected before the r-th selection. Then i's expected reward in the r-th selection is ᾱ[i,r] = α · s[i] / (S − σ[r]).
  • Add this expected reward, as compensation for the opportunity lost in the selections after the k-th, to the base winning reward of the Candidate elected in the k-th selection.
  • That is, the Candidate selected in the k-th of the n selections eventually receives the following total reward:
    α + Σ[r=k+1..n] α · s[i] / (S − σ[r])

This means that the rewards up to the k-th selection come from the actual random selections, while the rewards for the selections after the k-th, in which the Candidate can no longer participate, are paid as probabilistic expectations.

Problem Setting

What we want to do: create a randomly selected subset of a set, where each element has a weight that represents its probability of being selected. Candidates who are selected are rewarded, and we want this reward to be proportional to their weight.

On this page, I consider a way to select exactly a given number n of elements (Voters) from a set (Candidates) with probability proportional to their weight (Stakes). We also want the reward of each Candidate to be exactly proportional to its weight.

  1. Sampling with replacement: An intuitive way to select exactly n elements randomly from a set of N is to repeat random sampling from the set until the selected subset reaches size n. However, this isn't guaranteed to finish in constant time, because the same element may be selected repeatedly. In a distributed system this becomes a liveness problem.
  2. Sampling without replacement: Selecting the same element repeatedly can be prevented by excluding already-selected elements from the set. However, as the number of elements in the set decreases, the winning probability of the remaining elements increases. With a biased winning probability, elements with a higher probability are selected earlier, so elements with relatively lower probability end up winning more often than their actual weight implies. This is a fairness problem.

The purpose of this page is to derive a formula that solves the fairness problem in 2.

The problem in 2 is that the total Stake of the Candidate set becomes smaller as the Voter selection progresses, so a Candidate who is unlikely to be elected, i.e., a Candidate with a lower Stake, ends up with a higher winning probability than its Stake warrants. In fact, as shown in Fig 1, when we repeatedly run an election that selects 25 Voters from a set of 100 Candidates with an inversely biased Stake distribution, and all 25 Voters receive the same reward per election, the Candidates with higher Stake end up with a lower share of the overall reward distribution.


Fig 1. The ratio of Stake each Candidate holds versus the share of rewards each one earned in the non-duplicate case. Per the service requirements, the two should coincide.

Perspectives and Formulas

I assume that the underlying cause of the reduced reward for Candidates who are more likely to be selected is the opportunity they lose by being unable to participate in the selections that follow the one they won. In other words, adding to a Candidate's reward the expected income from the selections it has already been excluded from would seem to make the distribution fair.

More precisely, the increase in winning probability comes from the decrease in the Candidate set's total Stake as elected Voters drop out of it. The chance of being elected is therefore higher in later selections than in the first.

Let α be the base reward and si be Candidate i's Stake holdings. Then i's expected reward for a single selection can be expressed as:

ᾱi = α · si / S

where S is the total Stake holdings of the candidates participating in that election.
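
For example (with illustrative numbers of my own, not taken from the proposal): if α = 100, si = 5 and S = 1,000, then ᾱi = 100 · 5/1,000 = 0.5.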

Expected reward in duplicate case

(You can skip this section; it is included only for comparison.) If all Candidates participate in all n selections (i.e., a Candidate may win more than once), I write Sr = S = Σi si, since the total Stake is the same fixed value for every selection. The expected reward for n selections from the Candidate set can then simply be added up:

n · ᾱi = n · α · si / S

Due to the nature of the multinomial distribution, repeating n=1 100 times, repeating n=25 4 times, and doing n=100 once all yield the same expected reward.
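
For example (illustrative numbers): a Candidate with si/S = 0.05 and α = 1 expects 0.05 per selection, hence 5 in total over 100 selections, whether they are run as 100 elections of n=1, 4 elections of n=25, or one election of n=100.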

Although this scheme is simple and fair, as shown in Fig 2, n selections don't always produce n distinct Voters because of duplicate wins. The Byzantine assumption may therefore be weakened by there being fewer distinct Voters than assumed. Moreover, because duplicates can occur consecutively, there is no guaranteed end time even if selection is repeated until a given number of distinct Voters has been chosen.

Fig 2. Reward distribution ratio in case of the duplicate winning scheme.

Expected reward in non-duplicate case

By excluding the selected Voters from the Candidates, exactly n Voters can be selected in n elections (if N≥n).

However, as the number of Candidates decreases, the total Stake of the Candidate set decreases and the probability of being selected rises in later selections. As a result, each Candidate's total reward is no longer proportional to its Stake holdings. Candidates with more Stake tend to be selected earlier and therefore miss more of the later selections, so they end up with a smaller share of the rewards. This can be seen in the graph in Fig 1.

There are two reasons why such a difference occurs.

  1. A Candidate selected in the k-th selection no longer has a chance at the rewards of the remaining n−k selections.
  2. As the total Stake holdings S of the Candidate set decreases with the number of remaining Candidates, the expected reward ᾱi of Candidate i fluctuates as the selection progresses.

Introduce the concept of reward expectation

Assume a scratch lottery in which one particular person wins $1M. If 100,000 people take part, the expected reward per person is $1,000,000 ÷ 100,000 = $10 (setting aside gambling as a pastime, the scratch ticket is worth buying at $9 but not at $11).

This also means that if you hold a scratch ticket but never open it, you suffer a $10 opportunity loss. So if you couldn't take part in the lottery for some unavoidable reason, say the numbers weren't printed due to a printing error, your opportunity loss is $10. Roughly speaking, if you receive the $10 expected reward as compensation for that opportunity loss, the distribution of value among participants is fair.

The point is that participating in the lottery is itself worth $10, regardless of whether you win $1M or walk away with $0.

Opportunity loss compensation model

The following example, Fig 3, represents an election that selects four Voters, i, j, k, l, out of a set of N Candidates.

Fig 3. An election to select four Voters from a set of N Candidates.

  1. In the first selection, the winner i is given a reward α.
  2. In the second selection, the winner j is also given a reward α. There are two important points here.
    1. First, the total Stake S2 of the second selection has decreased by si due to i's departure after the first selection, so the winning probability px,2 of each remaining Candidate increases.
    2. Second, the first winner i doesn't participate in the second selection. This means i loses the opportunity to earn a reward in the 2nd selection. If i had remained among the Candidates, its expected reward for the second selection would have been ᾱi,2 = α·si/S2. In other words, the opportunity loss i suffers by not taking part in the 2nd selection is ᾱi,2.
  3. Similarly, in the third selection the total Stake of the Candidate set is S3 = S1 − si − sj, the winner k is also given a reward α, and i and j suffer opportunity losses of ᾱi,3 = α·si/S3 and ᾱj,3 = α·sj/S3, respectively.
  4. The 4th selection is the same, but note that the last winner l had all 4 selection opportunities, so it suffers no opportunity loss.

This model is based on the hypothesis that we can make the distribution fair by compensating each winner for its lost opportunities with the expected reward it would probabilistically have obtained if it had participated in those selections.

In summary, a Candidate i selected in the k-th selection should be rewarded with:

α + Σ[r=k+1..n] α · si / (S1 − Ŝr)

where Ŝr is the total stake of the Candidates selected in selections 1 to r−1 (so S1 − Ŝr is the total stake remaining in the r-th selection).
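
As a sanity check, the allocation can be simulated directly. The following is a rough, self-contained sketch of my own (the function name, stake values and trial count are illustrative, not Ostracon code): it elects n Voters without replacement, pays the base reward α per win, and pays every already-elected Candidate its expected reward for each later selection it misses.

package main

import (
  "fmt"
  "math/rand"
)

// electWithCompensation elects n winners from stakes without replacement.
// Each winner gets the base reward alpha; every previously elected Candidate
// is additionally paid its expected reward alpha*s_i/(S - sigma_r) for each
// later selection it can no longer participate in.
func electWithCompensation(stakes []float64, n int, alpha float64, rng *rand.Rand) []float64 {
  rewards := make([]float64, len(stakes))
  remaining := make([]bool, len(stakes)) // still in the Candidate set?
  total := 0.0
  for i, s := range stakes {
    remaining[i] = true
    total += s
  }
  var winners []int

  for r := 0; r < n && total > 0; r++ {
    // Compensation: winners of earlier selections miss this one.
    for _, w := range winners {
      rewards[w] += alpha * stakes[w] / total
    }
    // Weighted random pick among the remaining Candidates.
    x := rng.Float64() * total
    picked := -1
    for i, s := range stakes {
      if !remaining[i] {
        continue
      }
      if x < s {
        picked = i
        break
      }
      x -= s
    }
    if picked < 0 { // numerical edge case: fall back to any remaining Candidate
      for i := range stakes {
        if remaining[i] {
          picked = i
        }
      }
    }
    rewards[picked] += alpha // base reward for winning this selection
    remaining[picked] = false
    total -= stakes[picked]
    winners = append(winners, picked)
  }
  return rewards
}

func main() {
  rng := rand.New(rand.NewSource(1))
  stakes := []float64{100, 50, 25, 10, 5, 5, 3, 2}
  sums := make([]float64, len(stakes))
  for t := 0; t < 10000; t++ {
    for i, v := range electWithCompensation(stakes, 4, 1.0, rng) {
      sums[i] += v
    }
  }
  fmt.Println(sums) // cumulative rewards should track the stake ratio
}

Over many trials the cumulative rewards should track the Stake ratio, which is the behavior the compensated allocation shows in Fig 4.x below.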

Result

Here I show the reward distribution obtained by simulation. The Stake held by each Candidate and the reward paid to it are both expressed as ratios of their respective totals.

Fig 4.1. Stake distribution with inverse bias.

Fig 4.2. Stake distribution with linear bias.

Fig 4.3. Stake distribution with a flat.

Fig 4.4. A case where Stake is heavily (×100) biased toward one Candidate.

Fig 4.x shows, over 10,000 trials of an election selecting 25 Voters from 100 Candidates, two allocations: one paying only the base reward for winning, and one adding the expected-reward compensation for opportunity loss on top of the base reward. The base-reward-only allocation (red points) gives Candidates with high Stake ratios a lower share and Candidates with relatively low Stake ratios a higher share, while the compensated allocation follows the Stake ratio.

Note that in the flat-Stake case of Fig 4.3, the distribution is fair even without the correction. In other words, the unfairness comes from the Stake bias itself and is an inherent problem of weighted random sampling.

Fig 5.1. The number of Candidates is very small and close to the number of Voters.

Fig 5 shows the case where the number of Candidates is very small and close to the number of Voters.

Consideration

This simulation uses floating-point arithmetic. When the unit of reward is an integer, it's necessary to ensure that rounding does not bias the results.
The total reward α×n+δ paid in a single election varies from election to election. However, if S1 is very large relative to si, δ is a small value.

Proposal: Election of proposer and validators using VRF

  • Predefined

    • count of validators: v
    • minimum staking unit: S
    • total staking: p*S
    • staking of a candidate(not validator yet): k*S
  • At height N

    • Proposer (decided at height n-1) proposes a block that includes a VRF hash
      • vrf_hash = vrf_hash(proposer_private_key, last_commit_hash_of_n-1_block)
    • Validators (decided at height n-1) verify and vote on the block, and they verify the vrf_hash (its VRF proof).
    • Each validator (including the proposer) decides the validators for the next height using the vrf_hash
    • Each candidate gets k sub-validator slots in proportion to its staking value (see the sketch after the table below).
      • for one candidate
        • h0 = hash(vrf_hash, my_public_address+0)
        • h1 = hash(vrf_hash, my_public_address+1)
        • ...
        • hk-1 = hash(vrf_hash, my_public_address+k-1)
      • For all candidates, a total of p hash values can be derived.
      • Sort hash values
      • Select the top 1 as the proposer.
      • Select the top v as the validators.
        • Some validators may be selected as duplicates.
        • Such a validator receives rewards according to its number of wins, but performs the validator job as a single validator
          • number of wins: how many of its hash values are among the top v hash values
          • Reward = r + (w-1)*r*c (r = reward for 1 win, w = number of wins, c = allowance constant, 0<c<1)
          • For each duplicated slot, an additional validator is added to do the verification work; it earns r*(1-c)
        • ex) see the table below
          • To earn more rewards, it is better to run many nodes each staking the minimum amount than to stake all your money on one node.
          • This is a policy that encourages many nodes to operate.
| order | candidate | hash | result | reward (if c=1/2) |
| --- | --- | --- | --- | --- |
| 1 | candi1000 | 0xfd024.....23a | validator and proposer | r |
| 2 | candi0972 | | validator | r |
| ... | ... | | validator | r |
| 7 | candi1000 | 0xfd012....bb0 | validator | r/2 |
| ... | ... | ... | ... | ... |
| v | ... | ... | validator | ... |
| v+1 | candi4444 | 0xfd010...f07 | part-reward validator | r/2 |
| v+2 | ... | ... | no validator | 0 |
| | ... | | | 0 |
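
The following is a rough sketch of the selection step above (my own illustration, not Ostracon code; sha256 stands in for whichever hash function is actually used): each candidate derives one hash per minimum staking unit from the block's vrf_hash, all hashes are sorted, the first becomes the proposer and the top v become the validator slots for the next height (a candidate may win several slots).

package main

import (
  "bytes"
  "crypto/sha256"
  "encoding/binary"
  "fmt"
  "sort"
)

type ticket struct {
  candidate string
  hash      []byte
}

// drawTickets derives k hashes (one per minimum staking unit) for a candidate
// from the block's vrf_hash, as in h_i = hash(vrf_hash, address, i).
func drawTickets(vrfHash []byte, address string, k int) []ticket {
  tickets := make([]ticket, 0, k)
  for i := 0; i < k; i++ {
    idx := make([]byte, 8)
    binary.BigEndian.PutUint64(idx, uint64(i))
    input := append(append(append([]byte{}, vrfHash...), []byte(address)...), idx...)
    h := sha256.Sum256(input)
    tickets = append(tickets, ticket{candidate: address, hash: h[:]})
  }
  return tickets
}

func main() {
  vrfHash := sha256.Sum256([]byte("vrf_hash for height n"))                 // stand-in for the real VRF output
  stakes := map[string]int{"candi1000": 5, "candi0972": 3, "candi4444": 1} // stake in units of S (illustrative)

  var all []ticket
  for addr, k := range stakes {
    all = append(all, drawTickets(vrfHash[:], addr, k)...)
  }
  // Sort all derived hashes; the first hash in sorted order wins the proposer
  // slot and the top v hashes become the validator slots (duplicates allowed).
  sort.Slice(all, func(i, j int) bool { return bytes.Compare(all[i].hash, all[j].hash) < 0 })

  v := 4
  fmt.Println("proposer:", all[0].candidate)
  for i := 0; i < v && i < len(all); i++ {
    fmt.Printf("validator slot %d: %s\n", i+1, all[i].candidate)
  }
}

As a worked example of the reward rule with c = 1/2: a candidate whose hashes win w = 3 of the top v slots earns r + (3-1)*r*(1/2) = 2r in total, consistent with the table.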

Deciding the term for the selected validator set

Summary

We need a term for the "selected validator set", a concept that did not exist previously.

There are two ways to define a new term.

  • Let ValidatorSet mean all candidate validators, and introduce a new term for the selected validator set
  • Let ValidatorSet mean the selected validator set, and introduce a new term for all candidate validators

Problem Definition

We need to unify terms in order to avoid misunderstanding when discussing and writing code. Terms that point to the following two concepts should be defined:

  • selected validator set
  • candidate validator set

If we continue to use ValidatorSet, which already names an existing data structure, we must first decide which of the above two concepts it refers to, and then define a new term for the other concept.

Proposal

  • ValidatorSet: all candidate validators
  • ActiveValidatorSet or ElectedValidatorSet: selected validators

Replace Tendermint's PoS to VRF-based Random Sampling

Modify the current round-robin Proposer selection within the ValidatorSet to a random selection using VRF, building on #32, #40, and #45.

While this modification may turn much of the priority-adjustment logic that ValidatorSet currently has into dead code, the priority-adjustment code is left intact because the Priority in Tendermint's PoS is used for weighted random selection.
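
As an illustration of the direction (a hedged sketch with my own types, not the actual ValidatorSet API), the VRF output can serve as deterministic entropy for a stake-weighted pick instead of round-robin rotation; the existing Priority values could act as the weights here, which is the reason for keeping the priority-adjustment code:

package main

import (
  "crypto/sha256"
  "encoding/binary"
  "fmt"
)

type validator struct {
  address string
  power   uint64 // e.g. the existing Priority/voting power could act as the weight
}

// selectProposer maps 8 bytes of the VRF output onto the total weight and
// returns the validator whose weight range contains that point
// (modulo bias is ignored for brevity in this sketch).
func selectProposer(vrfOutput []byte, vals []validator) validator {
  var total uint64
  for _, v := range vals {
    total += v.power
  }
  target := binary.BigEndian.Uint64(vrfOutput[:8]) % total
  for _, v := range vals {
    if target < v.power {
      return v
    }
    target -= v.power
  }
  return vals[len(vals)-1] // unreachable if total > 0
}

func main() {
  seed := sha256.Sum256([]byte("vrf output for height N")) // stand-in for the real VRF hash
  vals := []validator{{"val-a", 60}, {"val-b", 30}, {"val-c", 10}}
  fmt.Println("proposer:", selectProposer(seed[:], vals).address)
}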
