
casper's People

Contributors

blurpesec, chihchengliang, darcius, djrtwo, gfredericks, hrishikeshio, jamesray1, jonchoi, jonny1000, karlfloersch, lounlee, nic619, nicksavers, paulhauner, ralexstokes, sorpaas, vbuterin, zaq1tomo, zilm13


casper's Issues

`vote_bitmap` does not need to map from `target_hash`

Issue

vote_bitmap, contained within the votes struct, currently has the following type. We use target_epoch to select the votes struct, and vote_bitmap is then a mapping from a target_hash to a vote bitmap:

vote_bitmap: num256[num][bytes32]

The target_hash (bytes32) component of the mapping is no longer necessary. When a vote is cast, we assert that the target_hash and target_epoch must be those of the current_epoch, locking these two fields together. We can and should remove the bytes32 component of the vote_bitmap type definition because the only entry in a given target_epoch's vote_bitmap will be for the single corresponding target_hash.

Proposed Implementation

  • Remove [bytes32] from the vote_bitmap typedef
  • Fix calls to the vote_bitmap throughout the contract accordingly.

Migrate types from `num` to `int`

Issue

The num and num256 types in Vyper are being deprecated in favor of int128 and uint256, respectively.

Proposed implementation

  • convert references of num to int128
  • convert references of num256 to uint256
  • convert type casting from the as_TYPE style to the newer convert(var, type) format (all type casts, not just int/uint)
  • migrate validator_service.py and update testing accordingly
  • ensure Vyper is up to date such that it can handle these types

simple_casper_tester.py serpent.compile

When I try to use simple_casper_tester.py for testing, it says the serpent module doesn't have an attribute called compile. I have tried several different versions of serpent; are there any solutions?

Migrate purity checker to vyper LLL

Issue

The purity checker is currently written in the deprecated Serpent language. We need to make updates to it but should stop relying on the Serpent compiler.

Proposed implementation

  • Migrate current purity checker from serpent to vyper LLL
  • Port contract to this repo
  • Add sanity check testing to pytest build
  • Compile Purity Checker from source instead of using bytecode in the testing setup

Example Vyper LLL and compilation techniques are here: https://github.com/ethereum/casper/blob/master/tests/utils/valcodes.py

Check for validator logout in slashing is off

Issue

The following code, found here in slash, is incorrect:

self.dynasty_wei_delta[self.dynasty + 1] -= deposit

if self.validators[validator_index_1].end_dynasty < self.default_end_dynasty:
    self.dynasty_wei_delta[self.validators[validator_index_1].end_dynasty] += deposit

This modifies self.dynasty_wei_delta for the validator's end_dynasty even after that dynasty has begun/occurred, and, more importantly, it modifies the next dynasty regardless of whether the validator is already logged out. This results in the validator's deposit being doubly subtracted from the total deposits.

Proposed Implementation

  • Change the conditional statement to:
end_dynasty: int128 = self.validators[validator_index_1].end_dynasty

# if validator not logged out yet, remove total from next dynasty
if self.dynasty < end_dynasty:
    self.dynasty_wei_delta[self.dynasty + 1] -= deposit

    # if validator was already staged for logout at end_dynasty,
    # ensure that we don't doubly remove from total
    if end_dynasty < self.default_end_dynasty:
        self.dynasty_wei_delta[end_dynasty] += deposit
  • add relevant tests

Mismatched comparison when assessing finality

@yzhang90 noticed that the comparison between current_dynasty_votes and total_curdyn_deposits happens after proc_reward:

if (current_dynasty_votes >= self.total_curdyn_deposits * 2 / 3 and ...)

which means current_dynasty_votes is the value from before proc_reward while total_curdyn_deposits is the value from after proc_reward. This seems unintentional. The comparison should likely use either both values from before the reward is applied or both values from after it, not one value from before and one from after.

@karlfloersch I wanted to make sure this is a bug and not an intentional feature before I move forward with changing it.

Withdraw reward separately

Hi,
I am not sure whether it would significantly increase the complexity and costs of the contract, but if not, it would be great if you could withdraw the reward separately. This should be optional: when someone chooses this method, the reward would not be used for staking and would not be added to the deposit, but to a different variable.
Thanks

Decouple testing between /casper and /pyethereum

Issue

I think we should decouple testing between the casper repo and the pyethereum/pyethapp repo. It makes getting certain builds to pass before merging impossible. For example, because the constructor signature changes in #56, I have to modify pyethereum before the build passes, but to modify pyethereum, I need the updated code!

A better workflow would be:

  • update casper contract, tests pass, merge
  • update pyethereum to point to updated casper contract
  • fix/modify repo as necessary such that tests in pyethereum pass

Proposed implementation

  • Remove reference to docker and pyethereum tests from .travis.yml
  • Add an auto-travis build to /pyethereum
  • (consider moving dev on /pyethereum back to ethereum/pyethereum)

Update README with install and test instructions

requirements.txt has been updated to include all requirements to build the contracts and run the tests. This should be noted in the README.

Proposed Implementation

  • add build instructions (pip install -r requirements.txt)
  • add test instructions (pytest tests)

Upgrade to vyper 0.0.4

Issue

Vyper has released (and tagged!) version 0.0.4. Time to upgrade

Proposed Implementation

  • Update hash reference to a v0.0.4 reference
  • Update type changes (num -> int128, num256 -> uint256, etc.)
  • Update type casting to use convert
  • Not sure what else, but ensure all tests pass.

Remove use of `dynasty_in_epoch` in `vote`

Issue

We still use dynasty_in_epoch here even though it is asserted that the target_epoch must be the current_epoch --> thus ensuring that the current_dynasty is self.dynasty.

Removing this dynasty_in_epoch use will make it easier for the formal verification team

Proposed implementation

Replace use of dynasty_in_epoch in the above link with self.dynasty:

current_dynasty: int128 = self.dynasty

Increase test coverage for casper contract

Issue

Need to continue to increase test coverage of simple_casper.v.py

Proposed implementation

Here is a running list. It is not comprehensive but represents only a sample of what should be done; a minimal sketch of the first deposit case is included after the list.
The tests could also use some re-organization, perhaps with subdirectories for "unit" tests and "integration" tests.

  • increase coverage of deposit
    • should fail if validator already has a deposit
    • should fail if validation_addr is not pure
    • add more checks for validator attributes after successful deposit (start_dynasty, addr, etc...)
  • increase test coverage of logout
  • test withdraw
  • test initialize_epoch explicitly (rather than implicitly through the new_epoch helper)
    • be sure to test attempts to double initialize!
  • test the "constant" methods (maybe in new file test_constants.py)
  • integration tests:
    • check that insta_finalize kicks back in if all validators withdraw deposits
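Here is a minimal pytest sketch of the "deposit fails if the validator already has a deposit" case. All fixture and helper names (casper, deposit_amount, validation_addr, withdrawal_addr, assert_tx_failed) are hypothetical placeholders, not the repo's actual conftest names:

# Hypothetical fixtures; adjust to the repo's real conftest.
def test_second_deposit_from_same_validator_fails(casper, deposit_amount,
                                                  validation_addr, withdrawal_addr,
                                                  assert_tx_failed):
    # First deposit succeeds and registers the validator.
    casper.deposit(validation_addr, withdrawal_addr, value=deposit_amount)
    assert casper.get_deposit_size(casper.get_validator_indexes(withdrawal_addr)) > 0

    # A second deposit tied to the same withdrawal address should be rejected.
    assert_tx_failed(
        lambda: casper.deposit(validation_addr, withdrawal_addr, value=deposit_amount)
    )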

Update change in deposit scale factor

As mentioned in
https://ethresear.ch/t/simple-casper-v-py-smart-contract-questions-rewards-and-penalties/1470/3

we need a change like this (a worked numerical comparison follows the diff):

diff --git a/casper/contracts/simple_casper.v.py b/casper/contracts/simple_casper.v.py
index eb0e6ab..f9a0b33 100644
--- a/casper/contracts/simple_casper.v.py
+++ b/casper/contracts/simple_casper.v.py
@@ -268,7 +268,7 @@ def initialize_epoch(epoch: num):
     self.current_epoch = epoch

     # Reward if finalized at least in the last two epochs
-    self.last_nonvoter_rescale = (1 + self.get_collective_reward() - self.reward_factor)
+    self.last_nonvoter_rescale = (1 + self.get_collective_reward()) /(1 + self.reward_factor)
     self.last_voter_rescale = self.last_nonvoter_rescale * (1 + self.reward_factor)
     self.deposit_scale_factor[epoch] = self.deposit_scale_factor[epoch - 1] * self.last_nonvoter_rescale
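To see why the two formulas differ, here is a quick check with made-up values (a 1% collective reward and a 0.5% reward factor; both numbers are illustrative only, not taken from the contract):

# Illustrative values only.
collective_reward = 0.01
reward_factor = 0.005

old_rescale = 1 + collective_reward - reward_factor           # current code: 1.005
new_rescale = (1 + collective_reward) / (1 + reward_factor)   # proposed:     ~1.004975

print(old_rescale, new_rescale)

The two agree to first order but differ at second order, which is the correction the diff above makes.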

Migrate calls to public contract attributes to new vyper format

Issue

Previously, to get a public contract attribute attribute from a Vyper contract, you had to prepend get_ to form get_attribute(). In the current version of Vyper this is no longer the case: get_ is no longer supported and attributes are called with attribute().

Proposed implementation

  • Update vyper version in requirements.txt and Dockerfile
  • Update calls to public contract attributes in pytest suite
  • Update calls to public contract attributes in validator_service.py
  • Rename the previously get_-prefixed public methods to conform to the new style (example: get_deposit_size --> deposit_size)
  • migrate imports from viper to vyper

For example:

index = casper.get_nextValidatorIndex()

will be converted to

index = casper.nextValidatorIndex()

Important note

This will have to be coupled with a corresponding PR to https://github.com/karlfloersch/pyethapp/blob/dev_env/pyethapp/validator_service.py and related code/tests

Exploit allowing votes on mis-matched epoch/hash

Issue

This was found by the Runtime Verification team in an initial audit of the contract. Due to an incorrect sequence of asserts, a user could presumably cast a vote for {epoch: n-1, hash: hash(epochN)}.

Below are the details copied from Daejun Park:
The vote method doesn't check the consistency between target_hash and target_epoch. On line 390, it asserts target_hash == self.get_recommended_target_hash(). Why not assert self.current_epoch == target_epoch as well?

Suppose current_epoch is n, get_recommended_target_hash() returns Tn, which is blockhash(n*epoch_length - 1), and expected_source_epoch returns Sn. A validator with index Vi prepares a vote message like (Vi, Tn, n-1, Sn).

I ran the vote message manually through the vote method:

  1. It passed the checks on lines 387, 390, 392 and 404.
  2. On line 413 and 416, the vote is accounted as a valid vote and added to the votes in epoch n-1!
  3. The validator will not get any reward on line 421.
  4. It may cause epoch n-1 to be justified on line 429, even though current epoch is n.

Proposed Implementation

  • Codify the above exploit in a test showing that it is in fact possible (a sketch of such a mismatched vote message is below)
  • Patch the casper vote method, asserting that target_epoch must be current_epoch
  • Remove the other if statements in the vote method related to target_epoch == current_epoch, because this is already asserted at the head of the method:
assert target_hash == self.get_recommended_target_hash()
assert target_epoch == self.current_epoch
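A sketch of the malformed vote described above, using the rlp message layout [validator_index, target_hash, target_epoch, source_epoch, sig] noted in the signature-length issue further down. All concrete values and the signature are placeholders:

import rlp

# Placeholders for the exploit scenario at epoch n.
validator_index = 7
n = 42
target_hash = b'\x11' * 32      # stands in for get_recommended_target_hash() at epoch n
source_epoch = 40               # stands in for expected_source_epoch at epoch n
sig = b'\x00' * 65              # placeholder signature

# Mismatched vote: the hash for epoch n, but target_epoch set to n - 1.
bad_vote = rlp.encode([validator_index, target_hash, n - 1, source_epoch, sig])
# Before the fix, casper.vote(bad_vote) would be accepted and counted toward epoch n - 1.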

validator_service.py should live in this repo

validator_service.py is currently in a fork of the pyethapp repo here, but I propose it should be ported to this repo.

validator_service.py:

  • directly depends on the implementation of the casper contract
  • shares dependencies with the casper contract
  • is automatically tested in this repo's build

Currently whenever we bump the version of vyper or pyethereum, change a public function name, or most other refactors, we have to update implementation and dependencies simultaneously in the other repo.

Proposed implementation

  • port validator_service.py to either /daemon or a new directory /service
  • update validator_service tests to run locally rather than across repos.
  • remove validator_service.py from the pyethapp fork

TODO & questions for `simple_casper.v.py`

Did a read of the latest code, and here are some questions and suggested next steps. Will be tackling it myself. Thanks to @karlfloersch and @vbuterin for answering tons of questions along the way.

HackMD Link

Economic Variables

[] solve @jonchoi
Source code


# Withdrawal delay in blocks
withdrawal_delay: num

# Reward for voting as fraction of deposit size
reward_factor: public(decimal)

# Base interest factor
base_interest_factor: public(decimal)

# Base penalty factor
base_penalty_factor: public(decimal)

# Current penalty factor
current_penalty_factor: public(decimal)

...

Line 200

# Base interest rate.
base_interest_rate = self.base_interest_factor / sqrt
# Base penalty factor
base_penalty = base_interest_rate + self.base_penalty_factor * log_distance

# where sqrt is sqrt of deposits
# and log_distance is log of ESF
if self.main_hash_justified:
    resize_factor = (1 + base_interest_rate) / (1 + base_penalty * (3 + 2*non_vote_frac / (1 - min(non_vote_frac, 0.5))))
else:
    resize_factor = (1 + base_interest_rate) / (1 + base_penalty * (2 + non_vote_frac / (1 - min(non_vote_frac, 0.5))))

# Delete the offending validator, and give a 4% "finder's fee"
validator_deposit = self.get_deposit_size(validator_index_1)
slashing_bounty = validator_deposit / 25

TODO

[] Periodic withdrawals
[] Help with test coverage
[] Modular initialization code
[] Abstract out self.total_curdyn_deposits > 0 and self.total_prevdyn_deposits > 0
[] Abstract out dynasty math (assert self.dynasty >= self.validators[validator_index].end_dynasty + 1 in line 306)
[] refactor vote
[] viper. var
[] talked with karl about making dynasties only change with validator set changes.
[] abstract out 2/3 constant throughout the code
[] refactor extracting params from votes etc
[] struct/dict for votes? (helpful for unpacking params from serialized format)
[] move finder's fee to constants
[] addr to valcode_addr

Questions

  • Revenue / Expense model vs Yield model?
  • Recommendation helpers: Expected hash? self.expected_source_epoch
  • Why? "If either current or prev dynasty is empty, then pay no interest, and all hashes justify and finalize"
  • "Increment the dynasty if finalized" dynasty is every finalized epoch? not change in validator set?
  • why does deposit_scale_factor depend on end_epoch?
  • naming: proc_reward? process reward?
    • we should name vote proc_vote then?

Comments on code

  • ln 148 – outdated comment
  • ln 188 – why max of prev and cur?
  • why sqrt factor? add comment that we're approximating
  • ln 193 – why log of epochs? please explain
  • ln 204 – rate + factor? seems like units are off?
  • ln 57 – "current expected hash"? (main_hash_justified)
  • ln 255 – dynasty + 2?
  • ln 251 – extract32? purity checker?
  • ln 178 – recommended target hash / where is blockhash?
  • ln 350 – assert target_hash == self.get_recommended_target_hash() -- why does that feel like a "circular" check?
  • ln 365 – "# Record that the validator voted for this target epoch so they can't again" -- why not allow and slash?
  • ln 44 – votes isn't clearly an index of votes for each epoch. hmm, maybe update comment

Delayed Logoff

Issue

Change the contract such that logging out kicks a validator out of the validator set 700 dynasties from the call to logout rather than 2 dynasties. This increases the stitching of the current dynasty and previous dynasty.

Proposed implementation

  • Add dynasty_logout_delay to casper contract initialization params
  • Add dynasty_wei_delta: map from dynasty -> (wei / m). This will replace next_dynasty_wei_delta and second_next_dynasty_wei_delta
  • replace references to 2 dynasties in contract with the new dynasty_logout_delay param
  • Add logout and withdraw tests that test against different dynasty_logout_delay values
  • Update casper config in karl's deploy scripts to include the 700 for dynasty logout (not exactly sure where to do this. will add when I find it.)

Questions

  • re-read the paper on "forward" and "rear" validator sets. cur and prev dynasty should operate as is. This just pushes the end_dynasty back

restrict epoch_length in __init__

Issue

epoch_length must be less than 256, otherwise it causes recommended_target_hash to throw due to limits in the BLOCKHASH opcode.

Proposed Implementation

  • add assertion to __init__: assert _epoch_length < 256
  • add relevant tests

Reposting - Why is finality required?

Hi,

I am an AI developer and quite new to ETH, with only 2 years of experience.
My question is about finality: why have we developed such an elaborate mechanism when it could have been done by timestamping each block (which happens already) and not accepting blocks beyond their destined timestamp? In PoS, generation time is predictable with minor latency.
The question in the ETH forum is here: http://forum.ethereum.org/discussion/16449/finality-in-casper-pos-why-is-it-required#latest

Memoize size of deposits

Issue

  • The votes data structure is poorly named; checkpoints would be much clearer. checkpoints[epoch].is_finalized reads quite nicely (thanks @jonchoi)
  • We currently do not memoize the size of total_*dyn_deposits associated with each checkpoint. Storing this data alongside a checkpoint would make it much simpler for clients to decide whether a checkpoint should be finalized/justified locally (i.e., whether it met NON_REVERT_MIN_DEPOSIT)

Implementation

  • rename votes to checkpoints
  • add curdyn_deposits and prevdyn_deposits to checkpoints
  • in initialize_epoch, before calling increment_dynasty, set checkpoints[current_epoch].*dyn_deposits = total_*dyn_deposits. This number will be used by the fork choice in assessing the deposit size for that checkpoint. The deposit size we care about is the one backing recommended_target_hash, which is blockhash(self.current_epoch*EPOCH_LENGTH - 1) -- the block right before the start of the epoch. A client-side sketch of how these values could be used follows.
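A rough client-side sketch of how the memoized values might be consumed, assuming hypothetical field names (curdyn_deposits, prevdyn_deposits, is_finalized) and a client-chosen NON_REVERT_MIN_DEPOSIT. Taking the minimum of the two dynasty deposits is one plausible reading, not something the issue specifies:

# Hypothetical client-side model; field names are assumptions based on the proposal above.
NON_REVERT_MIN_DEPOSIT = 10000 * 10**18  # client-chosen threshold, in wei

def checkpoint_is_finalized_locally(checkpoint) -> bool:
    # Only treat a checkpoint as finalized if the contract finalized it
    # AND enough deposits were backing it at that epoch.
    enough_deposits = min(checkpoint["curdyn_deposits"],
                          checkpoint["prevdyn_deposits"]) >= NON_REVERT_MIN_DEPOSIT
    return checkpoint["is_finalized"] and enough_deposits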

Safety of insta_finalization when deposit doesn't exist

Though this might be 'too much worrying', I'm wondering about safety before any deposit has been made.

According to the simple_casper code, it seems deposit_exists() is False for at least 3 epochs from initialization (2 epochs for start_dynasty in deposit, 1 epoch for the change in total_prevdyn_deposit), and those epochs are insta_finalize()d.

I understand this to mean that they would be finalized even if some attack happened on the PoW chain within those epochs, without any verification by validators.

I think this risk could be solved in some way, like initializing the casper contract with an initial validator node run by the core team, which works only for a while (until there are enough validators). Or we could postpone finalization until we have enough validators.

'Connection to 52.87.179.32 timed out. (connect timeout=10)'

I am getting an error following these instructions to run the Alpha Casper FFG Testnet: https://hackmd.io/s/Hk6UiFU7z

I am trying this on Elementary OS 0.4 Loki (Ubuntu 16.04 kernel) + Python 3.5.2, and the error appears after the third line of code:

>>> from web3 import Web3, HTTPProvider
>>> web3 = Web3(HTTPProvider('http://52.87.179.32:8545'))
>>> web3.eth.getBlock('latest')

ERROR:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 141, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/local/lib/python3.5/dist-packages/urllib3/util/connection.py", line 83, in create_connection
    raise err
  File "/usr/local/lib/python3.5/dist-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 357, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.5/http/client.py", line 1106, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.5/http/client.py", line 1151, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.5/http/client.py", line 1102, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.5/http/client.py", line 934, in _send_output
    self.send(msg)
  File "/usr/lib/python3.5/http/client.py", line 877, in send
    self.connect()
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 166, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 146, in _new_conn
    (self.host, self.timeout))
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPConnection object at 0x7fca6621dfd0>, 'Connection to 52.87.179.32 timed out. (connect timeout=10)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/requests/adapters.py", line 440, in send
    timeout=timeout
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 639, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/local/lib/python3.5/dist-packages/urllib3/util/retry.py", line 388, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='52.87.179.32', port=8545): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fca6621dfd0>, 'Connection to 52.87.179.32 timed out. (connect timeout=10)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/web3/eth.py", line 132, in getBlock
    [block_identifier, full_transactions],
  File "/usr/local/lib/python3.5/dist-packages/web3/manager.py", line 93, in request_blocking
    response = self._make_request(method, params)
  File "/usr/local/lib/python3.5/dist-packages/web3/manager.py", line 76, in _make_request
    return request_func(method, params)
  File "/usr/local/lib/python3.5/dist-packages/web3/middleware/attrdict.py", line 20, in middleware
    response = make_request(method, params)
  File "/usr/local/lib/python3.5/dist-packages/web3/middleware/formatting.py", line 23, in middleware
    response = make_request(method, formatted_params)
  File "/usr/local/lib/python3.5/dist-packages/web3/providers/rpc.py", line 52, in make_request
    **self.get_request_kwargs()
  File "/usr/local/lib/python3.5/dist-packages/web3/utils/compat/compat_requests.py", line 21, in make_post_request
    response = session.post(endpoint_uri, data=data, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/requests/sessions.py", line 555, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.5/dist-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/requests/adapters.py", line 496, in send
    raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPConnectionPool(host='52.87.179.32', port=8545): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fca6621dfd0>, 'Connection to 52.87.179.32 timed out. (connect timeout=10)'))

Also ping 52.87.179.32 gives the following statistics:

--- 52.87.179.32 ping statistics ---
414 packets transmitted, 0 received, 100% packet loss, time 422891ms

Can you confirm that http://52.87.179.32:8545 is a valid address?

Use capital letters for constants

Issue

Constant var names

There is currently no stylistic distinction between variables that change and those that should remain constant. Python coding style usually dictates that constants be defined in uppercase, like CONSTANT_VAR_NAME. The sharding team has begun to define constants in the SMC this way; see here for an example.

For the sake of clarity in reading the contract and to keep eth core contracts more uniform, we should move to this style.

Spaces between functions

Python style dictates two blank lines between functions (unless they are within a class). It looks like the SMC is using this convention.

Proposed Implementation

  • Rename all constant vars with uppercase names
  • Ensure two blank lines between each function declaration.

Practical Mining Pool is Possible

This is a WARNING about the underestimation of the risk of mining pools in PoS.

The Ethereum FAQ (https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ) and some papers like Proof of Activity (http://eprint.iacr.org/2014/452) have analyzed mining pools in PoS. They think pools are not that practical because of the high risk.

  • The sketch of a mining pool: everyone sends their money to a fund manager, who is delegated for validation. The fund manager distributes the tx fees to the people who contributed the lucky coins, keeps some for its business, and optionally shares with the other clients in the pool.

  • Rationale: running a full node can be tiring. Money is supposed to be highly distributed across many accounts. If most users cannot be online or are simply reluctant to run a full node, they may be happier to delegate their money, provided they trust the fund manager.

  • However: the risk of handing money to the fund manager is high. The fund manager can just steal all the money. This kind of mining pool is much more dangerous than the ones in PoW -- with the latter you never lose your money; at worst you get nothing.

Summary of the current idea

  • It is possible to hand your money to someone to validate on your behalf.
  • It is highly risky. Let us assume that this would not become general practice.

But what if...

  • We have a way to ensure the fund manager cannot steal your money.
  • We can ensure the fund manager does what it promises to its clients (note: to the clients, not to the community).

If this is possible, then we have practical mining pools -- you don't need to run a full node.

  Join CoinBase Today! Earn Interest Every day! 100% Secure from Experts!

  Even if CoinBase were under full control of ISIS, your money is 100% Secure!

Imagine how participants would think about the above advertisement if it were ALMOST TOTALLY TRUE.

Construction

  • A paper on PoW, though the construction applies to PoS, from Dr. Loi Luu: https://www.comp.nus.edu.sg/~loiluu/papers/SmartPool.pdf
  • Please refer to this post (https://medium.com/@loiluu/casper-sgx-8475e56244b), in the section "Hardening against long-range forks"; it is talking about a different thing but actually constructs such a mining pool (and they have figures!).
  • We can use Intel's Software Guard Extensions (SGX). It provides a trusted execution environment: programs and data inside cannot be read even by the OS or memory readers, and it can prove to anyone in the world what is running inside, called the "enclave".
  • SGX has been used for many research projects, such as data analytics, federated learning, and network middleware, all without secret leakage and with very practical performance.
  • Amazon has started to offer SGX support in virtual machines. Others may follow.
  • Many research papers on smart contracts have started to use SGX, for example Town Crier by Cornell, which trustably feeds data into smart contracts.
  • The authors of PoA had no reason to consider SGX -- that paper is from 2014, and SGX only arrived in 2015.

Should I trust Intel?

  • The question is not about you. Actually, it is: "will actually-a-little-bit-rational human beings take the risk of trusting Intel?"
  • The answer is almost certainly yes. Your CPU is Intel or AMD. Your GPU is AMD or Nvidia. If Intel did release a microcode update to add a backdoor, then security is a dream anyway.
  • Let us assume that a large group of users trusts Intel. Note that I am not assuming Intel is trustworthy; I am assuming that many people believe it is.

So far so good?

  • We can have several fund managers in the world. They provide an SGX-backed, validation-delegation service. People share the secret key of their wallet (not sending the money) with the enclave after they have verified the enclave. The enclave uses ZeroTrace (https://eprint.iacr.org/2017/549.pdf) to read/write its database while fully hiding the access patterns -- nothing is leaked.
  • The enclave behaves as one big stake -- when one of the coins under its management is hit, it completes the validation and takes the tx fee. It sends out tx fees as promised.
  • Users' secret keys are 100% secure. The enclave can prove that the keys never leak to the outside.
  • Validation is barely slowed down at all.
  • Users' money is secure and the return is guaranteed.

Bad thing?

  • You may feel that since SGX can guarantee correctness, it does no harm to the community.
  • But the fund manager only has a responsibility to be correct toward its clients. It has no responsibility toward the community.
  • The fund manager can simply drop some tx information at the network level -- or pretend it did not receive that message. Either way, it has the right to choose.
  • The mining pool comes back again.

Will mining pools in PoS be terrible?

  • Snow White (https://eprint.iacr.org/2016/919.pdf) is a PoS paper that analyzes corruption. Please jump to page 16. The figure shows that, at 16.5% centralization, a reasonable risk level would require 25 blocks to be verified. And they said:

     In all configurations, Snow White needs to wait for 
     34% to 43% more blocks than Bitcoin for the same 
     consistency failure probability.

  • The mining pool problem is much more serious in PoS!

SGX is something existing?

  • Intel CPUs from Skylake onwards support SGX.
  • My laptop could run a startup for delegated validation :)

So everyone can run a mining pool?

  • Yes.
  • Decentralized? There will be many small pools, so the problem is solved?
  • Nope. The big one takes little effort to serve everyone, and it can ask for a very small management fee.
  • Don't forget the fund manager distributes the money to everyone in the pool!
  • Do you still remember why people join a PoW mining pool? Individual mining does not reduce your chance of finding a hash, but you may never see that day.
  • Do you still remember that network latency matters? Coinbase may have the world's lowest-latency servers -- to broadcast its validation sharply to every full node upon a successful validation.
  • Do you know there are many shopping websites, yet students are often drawn to Amazon? Advertising works.

Wait! Can you summarize a little bit?

  • We will have several big mining pools in PoS systems.
  • Although (some) developers don't trust them, the masses turn out to trust them -- the risk is almost zero, right?
  • More secure than JPMorgan Chase?
  • The mining pools become the "UN Security Council's five permanent members". They validate almost everything.

What should we do?

  • There seems to be no immediate quick fix.
  • Let us sit down for a moment.

What do you think will help?

  • This question is much more difficult to handle.
  • I would suggest that Ethereum invite researchers for a discussion -- especially on how to thwart SGX.

Issue with typecasting in assertions

Issue

Vyper performs implicit conversion for ==, causing a few previously uncaught sources of error.

For example, in the deposit and logout functions, assert self.current_epoch == block.number / self.epoch_length converts self.current_epoch to a decimal rather than converting block.number / self.epoch_length to an int. This line should instead be assert self.current_epoch == floor(block.number / self.epoch_length).

Proposed implementation

  • Fix assertions mentioned above
  • Comb the contract for other incorrect implicit type casts and fix them
  • add any relevant tests.

remove `penalty_factor` contribution to reward_factor under normal conditions

Issue

Reasoning about and modeling rewards/penalties is particularly hairy because penalty_factor makes a 2x contribution to reward_factor even when the chain is being finalized under optimal conditions, since in that case esf() == 2:

self.reward_factor = adj_interest_base + self.penalty_factor * self.esf()

Suggested Implementation

Change the above code to the following to remove the penalty_factor contribution when esf is 2:

self.reward_factor = adj_interest_base + self.penalty_factor * (self.esf() - 2)

Partial Slashing

Issue

We need to implement partial slashing to reduce the discouragement of validating caused by the occasional not-so-malicious incorrectly signed message.

Proposed Implementation

The following is a rough outline and needs to be cleaned up

  • maintain a mapping epoch => total_slashed, keeping track of the total amount of validator deposits removed through slashing up until that point. When you initialize a new epoch, set total_slashed[now] = total_slashed[now - 1], so the total carries over.
  • let recently_slashed = total_slashed[now] - total_slashed[now - withdrawal_delay]
  • When a validator is slashed, add their entire deposit size to total_slashed[now]
  • When a validator gets slashed, forcibly log them out for dynasty n+1. Set a flag forcing them to be slashed when they withdraw. When they initiate the withdrawal, the following fraction of their deposit is slashed
fraction_to_slash = (total_slashed[withdraw_time] - total_slashed[withdraw_time - 2 * withdrawal_delay]) * 6 / total_deposits_at_logout

Additionally, their deposit is calculated based on the deposit_scale_factor at the time of withdrawal, so they suffer all inactivity penalties up until the block in which they withdraw.

Slashing at withdrawal time ensures that if a coordinated attack occurs and a significant number of validators are slashed, the attackers all suffer slashing of equal magnitude. A small Python model of this bookkeeping follows.
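The following is a minimal Python model of the epoch-indexed total_slashed bookkeeping and the fraction_to_slash formula described above. The withdrawal delay value and the cap at 1.0 are assumptions made for the sketch, not part of the proposal:

# Python model of the partial-slashing bookkeeping sketched above.
WITHDRAWAL_DELAY = 4  # epochs; placeholder value

total_slashed = {0: 0}

def initialize_epoch(epoch):
    # Carry the running total over into the new epoch.
    total_slashed[epoch] = total_slashed[epoch - 1]

def slash(epoch, validator_deposit):
    # The validator's entire deposit counts toward the running total.
    total_slashed[epoch] += validator_deposit

def fraction_to_slash(withdraw_time, total_deposits_at_logout):
    start = max(withdraw_time - 2 * WITHDRAWAL_DELAY, 0)
    recently_slashed = total_slashed[withdraw_time] - total_slashed[start]
    # As proposed above, scale by 6; the cap at 1.0 (never slash more than
    # the whole deposit) is this sketch's assumption.
    return min(recently_slashed * 6 / total_deposits_at_logout, 1.0)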

Validators gain/lose value during their 2 dynasty waiting period

Issue

@yzhang90 pointed out that because a validator's deposit is scaled at the current_epoch when they make a deposit, but they don't get to participate in consensus until 2 dynasties later (at least two epochs), the validator has a period of time in which they gain/lose ether but cannot participate in consensus.

Proposed Implementation [UNFINISHED]

The below proposal is incomplete.
Instead of storing a scaled ether deposit when the validator logs in, just store the raw ether, scaling by both the start_dynasty and end_dynasty scale factors when relevant (withdrawal).

This works for the individual deposits but does not solve the issue for dynasty_wei_delta.

Reduce gas limit in calls to validator.addr

Issue

STUB: Currently 500k. Too high (will fill in more reasoning later)

Suggested Implementation

  • reduce to 200k
  • Maybe make a variable VALIDATION_GAS_LIMIT or something, because this magic number is used in both vote and logout
  • Ensure discussion of this fact in VALIDATOR_IMPLEMENTATION.md

deposit_scale_factor in __init__ might cause Casper to stall

Found this while testing some code with Casper today, deploying to a testnet where the contract is deployed after the first epoch has passed. I might be missing something here, but it looks like this issue would stop Casper from working immediately when deployed to any network that has been running for a while, including mainnet.

The issue

Take an example where Casper is deployed to a network that is on its 1000th block and the epoch length for Casper is 50 blocks. When initialised, Casper's current_epoch is set as self.current_epoch = floor(block.number / self.EPOCH_LENGTH), so in this example it would set its current_epoch to 20.

Now, the main issue is with self.deposit_scale_factor in the constructor. It's an array of deposit scale factors with epochs as keys. However, it is initialised as self.deposit_scale_factor[0] = 10000000000.0, which is what creates the issue, as it assumes the current epoch is 0 and sets the scale factor only for that epoch.

self.deposit_scale_factor[0] = 10000000000.0

When a deposit is attempted in this scenario, the contract will try to calculate the scaled deposit for the new potential validator as scaled_deposit: decimal(wei/m) = msg.value / self.deposit_scale_factor[self.current_epoch]. If our current_epoch is 20, then self.deposit_scale_factor[self.current_epoch] will return 0 and cause the division to throw, meaning no deposits can be made at all and Casper is stuck.

scaled_deposit: decimal(wei/m) = msg.value / self.deposit_scale_factor[self.current_epoch]

The solution

A possible solution I've tested is to move self.deposit_scale_factor[0] = 10000000000.0 down two lines and change it to self.deposit_scale_factor[self.current_epoch] = 10000000000.0, which fixed the issue for me. A quick Python model of the failure mode is below.
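Here is a minimal Python model of the stall (the assumption being that the Vyper map defaults to 0, which is what makes the division throw); the numbers mirror the example above:

# Python model of the stall described above.
from collections import defaultdict

EPOCH_LENGTH = 50
block_number = 1000                      # chain has been running for a while

deposit_scale_factor = defaultdict(float)
deposit_scale_factor[0] = 10000000000.0  # what the current constructor does

current_epoch = block_number // EPOCH_LENGTH   # 20 in this example

msg_value = 2000 * 10**18
try:
    scaled_deposit = msg_value / deposit_scale_factor[current_epoch]  # divides by 0.0
except ZeroDivisionError:
    print("deposit reverts: no scale factor was ever set for epoch", current_epoch)

# The proposed fix keys the initial value by the real starting epoch instead:
deposit_scale_factor[current_epoch] = 10000000000.0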

Vyper code uses meter label

I was going through the code of simple_casper.v.py and saw a strange use of the m label.
In Vyper, m means meter, so why does casper use it (mostly as wei / m) in so many places?

Variable min deposit size to promote consistent # of vals

Issue

Defining an explicit, unchanging value for the minimum deposit size is risky because we do not have a clear idea of how many validators will show up. Instead, we want to define a base deposit value and have the minimum a validator needs to deposit scale with the number of validators.

Proposed Implementation

  • Track number of active validators with a new contract variable num_validators set to 0 at contract creation
  • When validator sends deposit assert msg.value >= max(self.min_deposit_size, num_validators)
  • Increment num_validators when a new validator sends a deposit
  • Decrement num_validators when a validator submits a logout or is slashed (if they haven't logged out already)

Question/ Possible bug in voting

Hello,
Regarding this check in the vote method:

if (current_dynasty_votes >= self.total_curdyn_deposits * 2 / 3 and
        previous_dynasty_votes >= self.total_prevdyn_deposits * 2 / 3) and \
        not self.votes[target_epoch].is_justified:

My question is: shouldn't total_curdyn_deposits_scaled be used instead of total_curdyn_deposits?
I think that with this code, if a validator goes offline there will be no finalization.

Clarification for comment

# Mapping of validator's signature address to their index number

The comment refers to the signature/validation address, but the assertion on line 299 references the withdrawal address.

My understanding is that validator_indexes tracks withdrawal addresses to avoid duplicate validators, and the comment should be changed to reflect this.

However, it would seem that the delete_validator function removes the contract's ability to track this over time; if someone can confirm the correct behavior, I'm happy to make a PR with this comment change.

Add logging for asserts in contract

Issue

Although they plan on adding support (vyperlang/vyper#523), Vyper does not currently support logging when an assert fails.

Proposed implementation

  • create a method assert_and_log(condition, message)
  • Migrate all calls from assert to assert_and_log to aid in debugging the test net

NOTE: This code has not been tested.

Error: __log__({message: bytes <= N})

@private
def assert_and_log(condition: bool, message: bytes <= N):
    # Log the failure reason, then revert.
    if not condition:
        log.Error(message)
        assert False

def some_method(val: num):
    assert_and_log(val < 5, b'The number passed into `some_method` is not less than 5')

We need to decide on a value for the maximum byte length of error strings.

Reduce max length of signature in vote message

Issue

Vote messages must be less than or equal to 1024 bytes, as defined by the type in the vote method signature. When parsing the signature of the vote out of the vote message, we currently only enforce that the signature, too, is less than or equal to 1024 bytes.

sig: bytes <= 1024 = values[4]

Due to the variable number of bytes required to encode the other elements in the list, the maximum possible length of a signature varies with the epoch or even the validator_index. To enforce stricter requirements, we propose restricting sig to a length of at most 934 bytes. This number assumes the other elements of the vote message take their maximal length to encode:

1024 bytes available
3 to encode the whole list
17 (worst case) to encode each int128 (there are three: validator_index, target_epoch, source_epoch)
33 to encode the bytes32 hash
3 to encode the signature's own length prefix

1024 - 3 - 17*3 - 33 - 3 == 934 bytes max for signature

Sanity checked with the following python code

import rlp
validator_index = target_epoch = source_epoch = 2**128 - 1
sig = b'\xff' * 934
target_hash = b'\xff' * 32
len(rlp.encode([validator_index, target_hash, target_epoch, source_epoch, sig])) == 1024

Note: the logout message has fewer elements, so its signature could theoretically be larger than 934 bytes, but to reduce complexity, 934 should be used for logout messages as well.

Proposed Implementation

  • Define MAX_SIGNATURE_LENGTH as 934
  • Enforce sig as <= MAX_SIGNATURE_LENGTH in vote, slash, and logout
  • tests.

GAS opcode should be included in purity checker

Issue

The purity checker does not currently include the GAS opcode. This opcode can reveal information about the outside world and can make the result of a function not entirely deterministic in its input arguments.

Proposed Implementation

  • Add GAS opcode (0x5a) to the purity checker
  • Compile to bytecode and integrate into testing setup in conftest.py
  • Update deploy scripts

deposit_scale_factor might be zero

In initialize_epoch, the new deposit_scale_factor is computed as:

    self.consensus_messages[epoch].deposit_scale_factor = something * (1 - 2 * base_coeff)

This can be zero because base_coeff can be 0.5:

    base_coeff = 1.0 / sqrt * (self.reward_at_1m_eth / 1000)

when sqrt happens to be 160.

This seems like a problem because deposit_scale_factor is sometimes used as a divisor for other scaling factors.

unclear function selector in call to purity checker

assert extract32(raw_call(self.PURITY_CHECKER, concat('\xa1\x90>\xab', convert(validation_addr, 'bytes32')), gas=500000, outsize=32), 0) != convert(0, 'bytes32')

Looks like a typo...

The raw_call presumably calls the submit function here: https://github.com/ethereum/research/blob/master/impurity/check_for_impurity.se

And following our convention we see:

> web3.sha3("submit(address)")
"0xa1903eab65b303e5d2b0f7ffc455cbdb5d081456ef23967ae426c2cada7ffffd"

which implies the method selector should be '\xa1\x90\x3e\xab'.

If someone can confirm, I'll submit a PR.
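For reference, '>' is just the printable form of the byte 0x3e, so the two literals encode the same four selector bytes:

>>> b'\xa1\x90>\xab' == b'\xa1\x90\x3e\xab'
True
>>> b'\xa1\x90>\xab'.hex()
'a1903eab'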

Port validator_service.py to web3.py

Due to the pending deprecation of the pyethereum/pyethapp stack, I propose porting the existing validator_service.py functionality to web3.py. Not only is web3.py well-featured and actively developed, it can plug into any client that conforms to the standard JSON-RPC interface. This is particularly exciting because a web3.py validator_service will be able to plug into any of the major clients (yay code reuse).

Proposed Implementation

I haven't fully spec'd this out yet, but it should be a natural translation from pyethapp to web3.py. More details to follow; a rough sketch of the kind of call the service would make is included after the list.

  • port functionality 1:1 from pyethapp to web3.py
  • new service will live in this repo
  • move contract tests to tests/contracts
  • add new service tests at tests/service
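As a minimal sketch (assuming a web3.py v4-style API): CASPER_ADDRESS and CASPER_ABI are placeholders, and the getter names are assumptions based on the public attributes discussed in the issues above, not confirmed method names:

from web3 import Web3, HTTPProvider

# Placeholders: fill in with the deployed contract's address and compiled ABI.
CASPER_ADDRESS = '0x0000000000000000000000000000000000000000'
CASPER_ABI = []  # compiled from simple_casper.v.py

w3 = Web3(HTTPProvider('http://localhost:8545'))
casper = w3.eth.contract(address=CASPER_ADDRESS, abi=CASPER_ABI)

# Read-only calls work against any client exposing standard JSON-RPC.
current_epoch = casper.functions.current_epoch().call()
dynasty = casper.functions.dynasty().call()
print(current_epoch, dynasty)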
