chainflip-io / chainflip-eth-contracts
License: MIT License
Currently the contracts are deployed using constant values as the keys.
cf.keyManager = deployer.deploy(KeyManager, AGG_SIGNER_1.getPubData(), GOV_SIGNER_1.getPubData())
We want to be able to set the AGG_PUB_KEY and GOV_PUB_KEY (not sure how this relates to the Signer()) using environment variables.
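A minimal sketch of how the deploy script could read these (Python, in the style of the brownie scripts; the helper name and the fallback handling are assumptions, not existing code):

```python
import os

# Hypothetical helper: read a 0x-prefixed pubkey from an env var,
# falling back to the constant currently hardcoded in the deploy script.
def pubkey_from_env(var_name: str, default: str) -> str:
    value = os.environ.get(var_name, default)
    if not value.startswith("0x"):
        raise ValueError(f"{var_name} must be a 0x-prefixed hex string")
    return value
```

Usage would then be something along the lines of `deployer.deploy(KeyManager, pubkey_from_env("AGG_PUB_KEY", <current constant>), ...)`, keeping today's constants as defaults.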
Pin the dependencies to fixed versions, especially in the package.json.
Upgrade the prettier version to a non-beta version. That will change the linting of all the Solidity files.
Provide a method to update the Vault address in the CfReceive contract, given that we don't use a router.
I still use about 2 ETH on every deploy of the contracts to Rinkeby... so I run out of ETH quite quickly.
With information about the license that we want to apply to our contracts.
Not sure what this will be yet, I'm a fan of GPLv3.
Implement a community override key that accomplishes the following:
The ability for the governance key to move funds out of the Vault IFF
Something similar should be done in the StakeManager to gate the govWithdraw function, so it can probably be done by inheritance.
In order to ensure that the total supply of our ERC20 token is correct, it's important that we synchronise it with the State Chain's view of the token supply. In theory, the State Chain is "in charge" of inflating (and/or deflating) the token in various ways. These include staking rewards, liquidity incentives, slashing, and potentially more. In practice, the StakeManager contract inflates the actual ERC20 token supply.
Previously, the decision was made to prevent the State Chain from having total control over the FLIP total supply in order to reduce the total profitability of gaining control of a malicious majority of Validators. Instead, the inflation of the ERC20 token is currently handled by the StakeManager contract and is based on the number of Ethereum blocks which have passed since the last mint.
It is now realised that the safety provided by preventing the State Chain's unilateral control over the ERC20 total supply is superseded by the 48h claims delay in the StakeManager contract. Broadly speaking, even if a malicious majority of Validators were to mint max(uint256) tokens, we can ensure that those tokens are locked in the StakeManager contract and only accessible via the claims process, which already has a 48h delay in order to give Chainflip governance time to respond to anything fishy.
Thus, it should already be impossible for a malicious majority to steal any FLIP from the StakeManager, so giving the State Chain control of the total supply is safe, and ensures that we can keep the views of the total supply in sync across the State Chain and Ethereum.
The proposed changes are the following (all apply to the StakeManager contract):
- _emissionPerBlock
- EmissionChanged
- setEmissionPerBlock
- _mintInflation
- getEmissionPerBlock
- getInflationInFuture
- getTotalStakeInFuture
- updateFlipSupply

updateFlipSupply:
/// @dev This method compares a given new FLIP supply against the old supply, then mints and burns as appropriate
/// @param sigData - signature over the abi-encoded function params
/// @param newTotalSupply - new total supply of FLIP
/// @param stateChainBlockNumber - State Chain block number for the new supply
function updateFlipSupply(
    SigData calldata sigData,
    uint newTotalSupply,
    uint stateChainBlockNumber
) external nzUint(newTotalSupply) noFish validSig(
    sigData,
    keccak256(
        abi.encodeWithSelector(
            this.updateFlipSupply.selector,
            SigData(0, 0, sigData.nonce, address(0)),
            newTotalSupply,
            stateChainBlockNumber
        )
    ),
    KeyId.Agg
) {}
event FlipSupplyUpdated(uint oldSupply, uint newSupply, uint stateChainBlockNumber);
The above changes assume that the StakeManager contract is some kind of admin for the FLIP token and has the power to mint and burn tokens. This might not currently be true, which would mean there's additional work required to give the StakeManager control over the FLIP token.
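The compare-then-mint-or-burn behaviour described for updateFlipSupply can be sketched as a small model (illustrative Python, not the Solidity implementation):

```python
def supply_delta(current_supply: int, new_total_supply: int):
    """Return which action ('mint', 'burn' or 'noop') and the amount needed
    to move the ERC20 supply to the State Chain's view of it.
    Illustrative model of the updateFlipSupply idea, not contract code."""
    if new_total_supply > current_supply:
        return ("mint", new_total_supply - current_supply)
    if new_total_supply < current_supply:
        return ("burn", current_supply - new_total_supply)
    return ("noop", 0)
```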
The web team was discussing a new feature request where we would display labels for validators that map to real-world identities, such as "E-Girl Capital". We'd like to do this in the most trustless way possible.
What if we had a data structure somewhere on-chain that mapped a staker's Ethereum address to either a string or an ENS owner address?
With this, stakers could register their label themselves on Ethereum. Our statechain-cache would then return the label from there, and not from an internal data source.
Lint the README and substitute the GOV_KEY with no leading 0x for one with a leading 0x.
Look into the priority fee hardcoded into deploy_initial_Chainflip_contracts, and to an extent for other scripts deploying to live networks.
Without it, there are errors when sending transactions to a live network complaining that the gas_limit (or priority fee) is too low.
We can leave it as is or use "auto" - then the fee is determined automatically via web3.eth.max_priority_fee.
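A sketch of how the scripts could resolve that setting (Python; the function name is an assumption, and the fee fetcher is injected so it stands in for web3.eth.max_priority_fee):

```python
def resolve_priority_fee(setting, fetch_max_priority_fee):
    # "auto" defers to the node's suggestion (web3.eth.max_priority_fee);
    # anything else is treated as a fixed value in wei.
    if setting == "auto":
        return fetch_max_priority_fee()
    return int(setting)
```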
When deploying and initializing the contracts we call the setCanConsumeKeyNonce function in the KeyManager. We rely on no frontrunning. Should we improve that to avoid any type of frontrunning? I think so, just for safety.
We could potentially look into having a factory deployer contract for this function and for the StakeManager's setFlip. Especially now that no tokens nor rights are given to the msg.sender (EOA or Factory in this case). A factory would remove the need for the onlyDeployer checks to avoid frontrunning (all deployments and initializations would happen atomically). The factory could potentially reach a gas limit, so that needs to be checked. It shouldn't reach the bytecode size limit if we make the factory deploy all the contracts in the constructor, as they will then be part of the creation code.
Create a mockToken that doesn't return a bool and check that the Vault transfer function works as expected, as we have our own simplified implementation of safeTransfer.
I could actually reuse one of the existing mockTokens and make it so it doesn't return a bool.
Add pre-commit hook running the linting check to get the errors before CI does.
This is the current interface for cross-chain calls:
function xCallNative(
    //...
    uint256 dstNativeGas,
    address refundAddress,
    //...
) external;
Do we want to future-proof the Vault for retrospective refunds, passing that refund address to the cross-chain call functions? This would be the address to refund the remaining gas paid by the user but not used in the egress chain. We will not have this feature on launch but we might have it later on.
Related to the previous question, the first discussion pointed towards having a uint256 dstNativeGas parameter in xCallNative and xCallToken for the user to set the amount of gas that should be used in the egress chain. We would take an equivalent amount from the ingress token and swap it to the native egress token. There might be a scenario where we can't implement that logic for launch. In that case, we might hardcode the amount of gas to use - should we then keep dstNativeGas and refundAddress to future-proof the contract?
Do we want to future-proof it with gas-topups functionality?
For gas top-ups, it would be good to provide a way for the user to top up the gas for cross-chain calls. Topping up on the ingress chain is what is more intuitive, as that is the chain that the user has funds in. However, any topped up gas will need to be swapped into the destination native token.
Topping up on the destination makes it easier for us if paid with the native token - then we don't need to swap it. However, it's not clear that the user will have funds there.
Assuming swapIDs are not on a per-chain basis, adding top-up functions to the Vault would allow for both ingress and egress top-ups, unless we do some checking in the State Chain - e.g. a call to the Vault on the Ethereum chain can only top up incoming cross-chain calls to Ethereum, and if the swapID is not for a call like that we dismiss it.
I would argue that the best option seems to be allowing anyone to top-up any cross-chain call with any token. Then we swap the incoming token to whichever egress chain token is required.
Here is a code snippet:
event AddNativeGas(bytes32 swapID, uint256 amount);
event AddGas(bytes32 swapID, address token, uint256 amount);
function addGasNative(bytes32 swapID) external payable xCallsEnabled {
    emit AddNativeGas(swapID, msg.value);
}

function addGasToken(
    bytes32 swapID,
    IERC20 token,
    uint256 amount
) external nzUint(amount) xCallsEnabled {
    token.safeTransferFrom(msg.sender, address(this), amount);
    emit AddGas(swapID, address(token), amount);
}
We discussed implementing a try-catch in the Vault's fetch function when calling a fetch function of a deployed Deposit contract (address passed as a parameter of the function call). I implemented that as it was very cheap gas-wise and it allowed us to catch cases where something was going wrong, especially to avoid reverting a batch.
However, I have found a small interesting behavior of the EVM. The try-catch will catch the case in which a call is made to an existing contract that doesn't have the fetch function implemented. However, it will not catch (and will therefore revert in) the cases in which it's an address without a smart contract deployed or an EOA. I'll spare you the details of how and why.
This can be solved by adding a check for existing bytecode in the target address before the try-catch statement. That adds around 400 gas, which is not a lot, but it needs to be performed for every fetch address in a loop; it's not a once-per-function thing.
Here are the options:
1- No try-catch nor bytecode check. We trust that our address system will work. The downside of this is that it can cause a batch to revert and that we can't witness which fetch went wrong in case of a failed batch.
2- Only try-catch. This adds little gas and allows us to catch a call to a contract that doesn't have the fetch function implemented. The problem with this is that it still doesn't catch the most likely issue: passing an address that could be derived from our Vault but that doesn't have a Deposit contract deployed yet for some reason. Or any other bug/malfunction for that matter.
3- Try-catch and bytecode check. This should catch all scenarios but it's more expensive gas-wise.
So the question becomes, how much do we want/need this check? Are we confident that we will always be passing the correct address in fetch calls? How cumbersome (aka prone to bugs) is the StateChain logic tracking swapIDs and used addresses (once implemented)?
PS: @cleanunicorn A low-level call doesn't really solve this without the bytecode check, since a low-level call to an address without code will return true (I would want Vitalik to explain this one...).
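For intuition, the missing check amounts to verifying that the target has bytecode before attempting the call; an off-chain analogue (hedged Python, with the code lookup injected in place of something like web3.eth.get_code) looks like:

```python
def has_bytecode(get_code, addr) -> bool:
    # An EOA or an undeployed address has empty code, so a call to it
    # "succeeds" trivially - exactly the case try-catch cannot catch.
    return len(get_code(addr)) > 0
```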
Would be nice to know our test coverage but this bug is preventing it - will investigate at some stage.
Some of the instructions in the README seem to be outdated, e.g.:
- yarn in the repo root dir does nothing
- npm install -g ganache-cli
Rerun slither and docgen again.
According to https://python-poetry.org/docs/dependency-specification/, pyproject.toml specifies ^3.7.3 for the Python version; this means a version greater than or equal to 3.7.3 but less than the major version 4.0.0. However, @flip-intern encountered an issue on 3.10.x that I didn't on 3.9.9.
We should use hard version numbers rather than ranges imo, else it breaks when you come back to it after some time (like it did for Lars)
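The caret semantics can be illustrated with a tiny model (Python, versions as tuples), showing why the problematic 3.10.x is admitted by the current range:

```python
def satisfies_caret(version, base=(3, 7, 3)):
    # Poetry's ^3.7.3 means >= 3.7.3 and < 4.0.0.
    return version >= base and version[0] < base[0] + 1
```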
Primary developer: @albert-llimos
Reviewers: @morelazers
Currently the StakeManager contract owns the FLIP token, with no way of modifying the owner, since the StakeManager is unable to call the FLIP token's transferOwnership.
Given that we would like to modify the StakeManager, this will require us to redeploy the FLIP token such that it is owned by the new StakeManager.
In the future, obviously we want to be able to avoid this; there are a few ways that we could do that, e.g. adding a transferFlipOwnership() method into the new StakeManager which is gated in the same way as the updateTotalSupply() method.
We can plan the actual approach inside of this issue. It's worth noting that our approach should make considerations for the "special" transaction signatures which are necessary from the Validator set; such as moving all FLIP from OldStakeManager to NewStakeManager; dealing with any pending claims etc.
We will want to make similar considerations for the KeyManager and Vault too.
We seem to be using issues as ad-hoc discussion boards. I got a notification that GitHub has a new 'discussions' feature - maybe we should enable these on this repo?
(@morelazers )
This script should take actions such that both Refund and RefundFailed events are emitted.
If we end up using a cheap-chain for auxiliary purposes, we will need to add USDC rebalancing between ETH and the cheap-chain. This can be done via Axelar or via Circle interoperability protocol.
In either of those two cases, logic needs to be added in the Vault contract to make those calls. That would be both to ingress and egress USDC. For that, there are several options:
1- We add arbitrary function calling from the SC so we can call whatever we want.
2- We hardcode a call to the Circle USDC contract or to Axelar (or both).
TBD.
USDC CCPT is already deployed on ETH-AVAX testnet, so it could already be tried out.
Follow-up on all the items needed for token launch - needed scripts, small changes to deployment of smart contracts...
For safety mechanism purposes, it is OK to remove the enabled bools gating swaps through the Vault. We don't want to add 2k gas per swap. Worst case scenario, if the protocol is frozen (governance action), we can continue to witness swaps and queue them or just dismiss them. Or in case of withdrawing all the funds via the governance key we can return them to their owner.
It was initially discussed that it would be good to have in order to deploy the contracts and have the swaps functionality disabled. Then enable it when we have liquidity and the protocol is ready to perform swaps.
I assume there will be some similar mechanism in the SC and then we would just not process any swaps that go through the Vault until swapping is enabled at the protocol level. So that use case might also not be relevant anymore. Quoting Dan:
"For the specific narrow use case of preventing people from using the contract before we are able to swap: I don't think we need this. We can prevent retail from doing this by simply not enabling it on the app. And any power users who are reckless enough to try can live with the consequences. Right now, we don't handle failed swaps very gracefully: if the swap can't be executed we [checks notes...] default the swap output to zero and simply forget about it. There's an open issue to handle swap failures, but it merits some more discussion."
The current fetch functions check the suspension flag. This is so that in case of emergency they get suspended too. However, in that case the StateChain will also be suspended (emergency or safeMode) and no new signed function calls should happen. And even if they are still submitted, the execution of fetch functions is not very risky anyway.
There is an argument to leave those ones in the case where something is faulty in the fetch mechanism and we are basically burning gas when fetching (e.g. fetching from the wrong addresses). However, that doesn't drain it either, it would get drained by refunding for the gas. In that case we need to pause the whole swapping mechanism at the protocol level anyway, so it probably doesn't matter too much.
The argument in favor of removing it is that we might want to be able to fetch tokens from ingress addresses even when paused. Otherwise, if we can't resume, those tokens will be lost forever. Furthermore, it will be cheaper to fetch, but that only applies to the fetch-only functions (Fetch and DeployAndFetch) so it shouldn't make much of a difference, as we expect AllBatch to be the one normally used.
We can consider removing that flag also from the swap functions to save 2k gas per swap. However, that means the functions wouldn't be suspended, so we rely on the StateChain being paused. To discuss with Dan.
Because of the way the Eth Observer works now, it needs to receive a block after it starts witnessing before it can do a lookup for the past logs. But because the script that makes all the blocks with the events in them runs before the integration test, it will never do the lookup.
A) Make the hardhat node auto-mine at a set interval (e.g. 1 sec). This will give the integration test an empty block to kickstart its EthObserver. We should also add a timeout to the integration test that is larger than this auto-mine interval.
hardhat.config.js:
mining: {
    auto: true,
    interval: [1000, 1000]
}
Any problems with this running on the CI? @tomjohnburton
B) We could run the deploy_and script after we start running the integration test. This requires timing and will be prone to failure.
C) Modify the CFE to change its behaviour during the test. Not nice.
This is to turn the Vault, StakeManager, and KeyManager into upgradable contracts - such that the underlying logic can be changed, but the storage of variables isn't changed in an upgrade. This is to allow for feature upgrades and bug fixes.
I'm not 100% sure yet exactly what kind of upgradable contract we should use, so some research into that needs to be done. I did a bit of initial research and it seems that OpenZeppelin only supports 1 kind of upgradable contract, though we can't use their tools for it because they're for truffle/hardhat, but that shouldn't be much of an issue.
This should only need minimal changes in the aforementioned contracts, and shouldn't need any of the actual tests to change, though the setup for tests in conftest.py will need to change to deploy the proxies and point them to the logic contracts etc.
Additional tests that go through the upgrade process will be needed, and potentially we should add upgrading the contracts to the stateful tests (already slightly dreading that lol).
We also need to decide how the proxies will be managed - the only thing that springs to mind is a Gnosis Safe? And since it was discussed that the gov key should be changed to a regular ETH account, which would probably be a Gnosis Safe, it probably makes sense to have that account/Gnosis Safe control the proxies as well as be the gov key in the StakeManager?
Proposed branch name: feature/upgradable/CH150
Review contracts for function calls that users can trigger when the protocol is paused.
Such that our validators don't have to spend ETH.
Ensure all events are emitted. For instance, TransferTokenFailed is not emitted. Not sure about TransferNativeFailed.
Also check, for instance, GovAction.
Also, remove unused ones - e.g. swapEnabled. The script is failing atm.
Add a case where a single transaction triggers the same event twice - transfer failed can be triggered multiple times in the same batch. Just to ensure our witnessing is robust.
There is a timeout Emergency mechanism to gate some functions so they can only be called after 2 days of no new signed messages. That would be in case of the network being halted. That applies for:
1- KeyManager - to update the AggKey with the GovKey. It makes sense to have it so that GovKey can't rug.
2- Vault - in govWithdrawal. Originally that function was only gated with onlyGovernor. However, we added an onlyCommunityGuardDisabled. Therefore, I am not sure the timeout is needed. It does make sense in case of the network being halted, but if the AggKey is compromised and we have managed to suspend the contract in time, they can continue validating signatures (since not all the signed functions are suspended) and we will never be able to get the funds out. The argument to not change it is that most likely in that case (aggKey compromised) it's already too late, and that without it the GovKey and CommKey could rug a perfectly functioning network.
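The 2-day gate described above can be modelled as follows (illustrative Python; the names are not the contract's):

```python
TWO_DAYS = 2 * 24 * 60 * 60  # seconds

def timeout_elapsed(last_sig_timestamp: int, now: int, delay: int = TWO_DAYS) -> bool:
    # A gated call is allowed only once no new signed message has been
    # validated for `delay` seconds, i.e. the network looks halted.
    return now - last_sig_timestamp >= delay
```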
When we're deploying the network for the first time, our Genesis Validators will need a small amount of FLIP staked to them on the State Chain else they cannot pay for transaction fees (and thus cannot register the stakes of anyone else).
The solution to this issue is to leave some of the initially minted FLIP inside the StakeManager when it is deployed. This FLIP is then by-default split amongst the Genesis Validators (the precise accounting is the responsibility of the State Chain).
The proposed implementation is to change the current constructor function in the StakeManager from this:
constructor(IKeyManager keyManager, uint minStake, uint flipTotalSupply) {
    _keyManager = keyManager;
    _minStake = minStake;
    _defaultOperators.push(address(this));
    _FLIP = new FLIP("ChainFlip", "FLIP", _defaultOperators, msg.sender, flipTotalSupply);
    _ERC1820_REGISTRY.setInterfaceImplementer(address(this), TOKENS_RECIPIENT_INTERFACE_HASH, address(this));
}
To this:
constructor(IKeyManager keyManager, uint minStake, uint flipTotalSupply, uint numGenesisValidators, uint genesisStake) {
    _keyManager = keyManager;
    _minStake = minStake;
    _defaultOperators.push(address(this));
    uint genesisValidatorFlip = numGenesisValidators * genesisStake;
    _FLIP = new FLIP("ChainFlip", "FLIP", _defaultOperators, msg.sender, flipTotalSupply - genesisValidatorFlip);
    _FLIP.mint(address(this), genesisValidatorFlip, "", "");
    _ERC1820_REGISTRY.setInterfaceImplementer(address(this), TOKENS_RECIPIENT_INTERFACE_HASH, address(this));
}
And then change all the tests to ensure correctness.
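A quick sanity model of the split the new constructor produces (illustrative Python mirroring the arithmetic above; the figures in the usage note are made up):

```python
def genesis_split(flip_total_supply: int, num_genesis_validators: int, genesis_stake: int):
    # Mirrors the constructor arithmetic: the deployer receives the total
    # supply minus the genesis allocation, while the StakeManager mints
    # and holds the genesis allocation for the Genesis Validators.
    genesis_validator_flip = num_genesis_validators * genesis_stake
    assert genesis_validator_flip <= flip_total_supply, "genesis stake exceeds supply"
    deployer_balance = flip_total_supply - genesis_validator_flip
    return deployer_balance, genesis_validator_flip
```

For example, 150 validators at a 100,000 FLIP genesis stake would leave 15,000,000 FLIP in the StakeManager; the two balances always sum to the total supply.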
It is to be noted that this requires us to synchronise the values numGenesisValidators and genesisStake with the values that we will use to launch the network at genesis. This is the responsibility of the State Chain genesis config and is out of scope of this issue.
Update hardhat version to 2.6.1 since Kyle has reported issues when subscribing to events in 2.6.0
Brownie compilation fails in the recent runs on the self-hosted runners. E.g.
https://github.com/chainflip-io/chainflip-eth-contracts/runs/7073638252?check_suite_focus=true
Locally it works and running on github-runners also seems to work, so it looks like an environment issue in the self-hosted runners.
Once the contracts get close to being frozen for deployment, the settings of the optimizer should be tweaked to try to find a good balance between bytecode size and optimization for execution costs. Given that we have now changed the approach to fetching and we will no longer have to worry a lot about bytecode size for the Deposit contracts, we can probably increase the degree of optimization.
We currently have a boolean to enable/disable swaps through the Vault. In PR #242 the boolean is used to enable/disable cross-chain calls instead (not swaps). Having a control variable like that adds 2100 gas (sload) to each function call but it adds the capability to stop swaps/calls in case of emergency. So it's not free.
It needs to be discussed which calls we want to be able to stop, also depending on what and how we implement an emergency mechanism in the witnessing and the rest of the protocol. It may even be that we want two different variables, one for swaps and one for cross-chain calls.
Create a script to deploy multiple tokenVesting contracts. Also, look into whether we should deploy them via EOA and then transfer FLIP into it from the safekeeper manually, or if we should first transfer the FLIP from the safekeeper to the EOA and do everything from it.
The script will probably be something similar to the airdrop - reading addresses, type and amount from a file, deploying a TokenVesting contract with the set beneficiary and type, and then transferring the corresponding FLIP.
Instead of exporting a WEB3_INFURA_PROJECT_ID it should be endpoint agnostic, and just take a full WS URL.
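A hedged sketch of the endpoint resolution (Python; the WEB3_WS_URL variable name and the Infura URL shape are assumptions, with the project ID kept only as a fallback):

```python
def resolve_ws_endpoint(env: dict) -> str:
    # Prefer a full WS URL so any provider works; fall back to
    # constructing an Infura endpoint from the project ID.
    url = env.get("WEB3_WS_URL")
    if url:
        return url
    project_id = env.get("WEB3_INFURA_PROJECT_ID")
    if project_id:
        return f"wss://mainnet.infura.io/ws/v3/{project_id}"
    raise RuntimeError("set WEB3_WS_URL or WEB3_INFURA_PROJECT_ID")
```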
Based on the conversation here: https://github.com/chainflip-io/design-documentation/issues/10
We should allow any stakers to withdraw their full deposit for 24 hours after depositing.
This allows us to ensure that those who submit invalid data (or have another problem with their infrastructure) can get their funds back promptly without us being required to notarise anything additional on the state chain.
Any deposits which have been in the contract for longer than 24 hours are considered valid stakes, and withdrawing them should take the full unstake period.
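The window can be modelled as follows (illustrative Python; only the 24h figure comes from the text above):

```python
DAY = 24 * 60 * 60  # seconds

def can_withdraw_immediately(deposit_timestamp: int, now: int) -> bool:
    # Deposits younger than 24 hours may be withdrawn in full right away;
    # older ones are valid stakes and must wait the full unstake period.
    return now - deposit_timestamp < DAY
```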
chainflip-eth-contracts/contracts/StakeManager.sol
Lines 14 to 18 in 443ae25
It implies the CFE takes the n top bids, which is a process of the SC.
We could consider having a function to allow LPs to deposit liquidity via the Vault.
One option could be adding an extra function. Another could be to reuse xSwapNative and xSwapToken, passing a certain uint32 as the State Chain reference, leaving swapIntent empty and using dstAddress as the LP AccountId. In this second scenario, maybe we would need to rename the functions to something like xIngressNative.
Adding a function only for LPs would probably make more sense, to clearly separate between swapping and providing liquidity. In my mind it should not add much overhead when it comes to bytecode size.
With a link to the audit(s) that we have completed, as well as information about how to responsibly disclose bugs.