fuseio / fuse-network
Fuse network engine; contains instructions to connect as a node.
License: MIT License
I would gladly pay a small fee for this as a validator. It's better than Infura shutting me down one day and demanding thousands of dollars.
Also, Infura is bad for Eth.
As discussed in Telegram.
If a delegator, or the staker themselves, removes any portion of their stake/delegation and the total falls below 100k, it looks like the node is pulled off the validators list. When the 100k is reached again via a delegation or stake (in the same cycle), the node gets re-added to the list with the default fee, not the fee that was previously set by the validator.
I suggest that the validator should remain on the list until the next cycle starts; the snapshot used for the new cycle should assess whether they fall off the list, if that's possible.
Auto generated by fuse_bot based on @apohly message https://t.me/c/1410369021/9129
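The behaviour proposed above can be sketched as follows; this is a hypothetical JavaScript model, not the actual consensus contract code:

```javascript
// Hypothetical sketch of snapshot-based validator membership (not the
// real consensus contract). Stakes are in whole FUSE; threshold is 100k.
const THRESHOLD = 100_000;

// Take a snapshot of current stakes at the start of a cycle.
function takeSnapshot(stakes) {
  return { ...stakes };
}

// The validator set for a cycle is decided only from that cycle's
// snapshot, so a mid-cycle undelegation does not remove a validator
// (or reset its fee) until the next snapshot.
function validatorSet(snapshot) {
  return Object.keys(snapshot).filter((v) => snapshot[v] >= THRESHOLD);
}

const snapshot = takeSnapshot({ alice: 120_000, bob: 100_000 });
// Mid-cycle: bob drops to 90k, but the current cycle's set is unchanged.
const liveStakes = { alice: 120_000, bob: 90_000 };
const thisCycle = validatorSet(snapshot);                 // ['alice', 'bob']
const nextCycle = validatorSet(takeSnapshot(liveStakes)); // ['alice']
```

With this model the validator's fee setting never needs to be re-applied, because membership is only ever re-evaluated at cycle boundaries.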
Update the quickstart to fix errors when run on Debian.
On Debian the THP package has a different name, so a separate branch case is needed for it.
Trying to sign data with web3.eth.sign but getting this error:
'Error: Invalid JSON RPC response: "<html>\r\n<head><title>504 Gateway Time-out</title>'
Here is the code I am using:
const Web3 = require('web3');
const web3 = new Web3(new Web3.providers.HttpProvider('https://rpc.fuse.io'));
...
await web3.eth.sign(hash, account.address, function (err, result) {
  if (!err) {
    console.log('Signature: ' + result);
  }
});
However this code runs fine and returns the correct transaction count for the account:
await web3.eth.getTransactionCount(account.address, function (err, result) {
  if (!err) {
    console.log('Nonce: ' + result);
  }
});
This worked on fuse testnet.
Was parity run with the command-line option --jsonrpc-apis? According to https://openethereum.github.io/wiki/JSONRPC.html:
As documented in the options (available under parity --help), not all APIs are exposed by default. However, you can simply enable them by running parity with the flag --jsonrpc-apis APIS.
FIP-8 adjusted the block reward by the validator's stake but left the governance of the network untouched. This gives equal voting power to all validators, creating an incentive to run more nodes to gain more votes.
We are looking to adjust voting power in the same way the block reward was adjusted: a validator with double the stake gets double the voting weight (just as they get double the reward). Some more context can be found (again) in FIP-8.
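The proposal can be illustrated with a small sketch (the stakes here are hypothetical numbers, not real contract code):

```javascript
// Sketch of stake-weighted voting: a validator's voting weight is
// proportional to its stake, mirroring how FIP-8 made block rewards
// proportional to stake. Stake figures are hypothetical.
const stakes = { a: 100_000, b: 200_000, c: 100_000 };
const total = Object.values(stakes).reduce((sum, s) => sum + s, 0);

// Weight of a validator's vote as a fraction of total stake.
function votingWeight(validator) {
  return stakes[validator] / total;
}

// b staked twice as much as a, so b's vote counts double --
// splitting a stake across two nodes no longer doubles influence.
const ratio = votingWeight('b') / votingWeight('a'); // 2
```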
Dear @LiorRabin , I noticed that my node is down,
looked at:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3904ed63aa4e fusenet/netstat "/home/ethnetintel/r…" 8 weeks ago Up 8 weeks fusenetstat
0dada7fbd60e fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up 18 minutes fuseoracle-sender-foreign
162e325e0802 fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up About a minute fuseoracle-affirmation-request
ed8724aee88d fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up 8 weeks fuseoracle-sender-home
24b855142374 redis:4 "docker-entrypoint.s…" 8 weeks ago Up 8 weeks 6379/tcp fuseoracle-redis
1983861817bf fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up 3 minutes fuseoracle-rewarded-on-cycle
c2eedd0a0718 fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up 4 seconds fuseoracle-initiate-change
237d50ff3f36 fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up 37 seconds fuseoracle-collected-signatures
bb772449d42b rabbitmq:3 "docker-entrypoint.s…" 8 weeks ago Up 8 weeks 4369/tcp, 5671-5672/tcp, 25672/tcp fuseoracle-rabbitmq
c196f1536ae9 fusenet/oracle "docker-entrypoint.s…" 8 weeks ago Up 3 seconds fuseoracle-signature-request
1260d94f9886 fusenet/validator-app "docker-entrypoint.s…" 8 weeks ago Up 5 hours fuseapp
03dc7fe1e327 fusenet/node "/home/parity/parity…" 8 weeks ago Restarting (2) 14 seconds ago fusenet
Please could you provide me again with the files I should copy to create a new machine and debug it? An alarm on node health would be great.
Currently the netstats page only shows the bridge version number, which is used to identify the version of all the roles. We should extend netstats and add the following:
• Role
• Version
Then each role has a unique version, which will help keep track of what software the nodes are running.
There are at least two functions in the current Fuse token contract that are anti-decentralization and could be abused to destroy the token. In light of this, it is doubtful it will ever be worth anything, as investors and exchanges alike refuse to deal with centralized tokens.
The mint function as it stands allows whoever is in control of Fusenet to create as many tokens as they wish (making it a centralized, non-trustless token, which is bad news), which could be used to dump an infinite amount of tokens on the market, destroying the value of the Fuse token.
The mint function can be removed, since we already know the inflation rate of the token (x % of tokens over y amount of time). Inflation can be hard-coded and applied internally rather than through a non-trustless function. The generated tokens can be sent to the bridge contract to hold in reserve for validators.
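For illustration, a fixed, publicly computable schedule like the one described could look roughly like this; the 5% rate and 300M initial supply are placeholder numbers, not Fuse's actual parameters:

```javascript
// Hypothetical fixed-inflation schedule: anyone can compute the supply
// after n years without trusting a privileged mint() caller. The 5%
// rate and 300M initial supply are placeholders, not Fuse's parameters.
const INITIAL_SUPPLY = 300_000_000;
const ANNUAL_RATE = 0.05;

function supplyAfterYears(n) {
  return INITIAL_SUPPLY * (1 + ANNUAL_RATE) ** n;
}

// Tokens created in year n; these would go straight to the bridge
// contract to hold in reserve for validators.
function mintedInYear(n) {
  return supplyAfterYears(n) - supplyAfterYears(n - 1);
}

const year1Mint = mintedInYear(1); // 5% of 300M = 15M
```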
Current Mint function:
/**
 * @dev Internal function that mints an amount of the token and assigns it to
 * an account. This encapsulates the modification of balances such that the
 * proper events are emitted.
 * @param account The account that will receive the created tokens.
 * @param value The amount that will be created.
 */
function _mint(address account, uint256 value) internal {
    require(account != 0);
    _totalSupply = _totalSupply.add(value);
    _balances[account] = _balances[account].add(value);
    emit Transfer(address(0), account, value);
}
The second function that needs to be discussed is the burn/burnFrom function. It is usable by whoever controls the Fuse token to burn tokens from ANY address. There is absolutely no reason to have this function in the contract, as it can only be used for nefarious purposes, and it should be removed in its entirety. Nobody wants their tokens burnt without their permission.
The only way this function should ever be implemented is if ONLY the hodler were able to burn their OWN tokens; nobody should ever be able to burn tokens from an address they do not own, and that includes whoever is in control of the Fuse token contract.
Current burn function:
/**
 * @dev Internal function that burns an amount of the token of a given
 * account.
 * @param account The account whose tokens will be burnt.
 * @param value The amount that will be burnt.
 */
function _burn(address account, uint256 value) internal {
    require(account != 0);
    require(value <= _balances[account]);
    _totalSupply = _totalSupply.sub(value);
    _balances[account] = _balances[account].sub(value);
    emit Transfer(account, address(0), value);
}
Current burnFrom function:
/**
 * @dev Internal function that burns an amount of the token of a given
 * account, deducting from the sender's allowance for said account. Uses the
 * internal burn function.
 * @param account The account whose tokens will be burnt.
 * @param value The amount that will be burnt.
 */
function _burnFrom(address account, uint256 value) internal {
    // Should https://github.com/OpenZeppelin/zeppelin-solidity/issues/707 be accepted,
    // this function needs to emit an event with the updated approval.
    require(value <= _allowed[account][msg.sender]);
    _allowed[account][msg.sender] = _allowed[account][msg.sender].sub(value);
    _burn(account, value);
}
Fixing these functions would require a token swap. If inflation must be changed in the future, it should require another token swap (let hodlers decide whether to take part in a token swap, or dump their holdings if they do not like it).
These functions can be found in the current Fuse token contract:
https://etherscan.io/address/0x970B9bB2C0444F5E81e9d0eFb84C8ccdcdcAf84d#code
If you run the following code once on mainnet and then once on Fusenet, you will see the transaction is not doing what's expected.
It should send the same amount of FUSE on Fusenet as ETH on mainnet, but it does not.
var abi = [
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"name": "from",
"type": "address"
},
{
"indexed": true,
"name": "to",
"type": "address"
},
{
"indexed": false,
"name": "value",
"type": "uint256"
}
],
"name": "Transfer",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"name": "owner",
"type": "address"
},
{
"indexed": true,
"name": "spender",
"type": "address"
},
{
"indexed": false,
"name": "value",
"type": "uint256"
}
],
"name": "Approval",
"type": "event"
},
{
"constant": true,
"inputs": [],
"name": "totalSupply",
"outputs": [
{
"name": "",
"type": "uint256"
}
],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [
{
"name": "account",
"type": "address"
}
],
"name": "balanceOf",
"outputs": [
{
"name": "",
"type": "uint256"
}
],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": false,
"inputs": [
{
"name": "recipient",
"type": "address"
},
{
"name": "amount",
"type": "uint256"
}
],
"name": "transfer",
"outputs": [
{
"name": "",
"type": "bool"
}
],
"payable": false,
"stateMutability": "nonpayable",
"type": "function"
},
{
"constant": true,
"inputs": [
{
"name": "owner",
"type": "address"
},
{
"name": "spender",
"type": "address"
}
],
"name": "allowance",
"outputs": [
{
"name": "",
"type": "uint256"
}
],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": false,
"inputs": [
{
"name": "spender",
"type": "address"
},
{
"name": "value",
"type": "uint256"
}
],
"name": "approve",
"outputs": [
{
"name": "",
"type": "bool"
}
],
"payable": false,
"stateMutability": "nonpayable",
"type": "function"
},
{
"constant": false,
"inputs": [
{
"name": "sender",
"type": "address"
},
{
"name": "recipient",
"type": "address"
},
{
"name": "amount",
"type": "uint256"
}
],
"name": "transferFrom",
"outputs": [
{
"name": "",
"type": "bool"
}
],
"payable": false,
"stateMutability": "nonpayable",
"type": "function"
},
{
"constant": false,
"inputs": [
{
"name": "spender",
"type": "address"
},
{
"name": "addedValue",
"type": "uint256"
}
],
"name": "increaseAllowance",
"outputs": [
{
"name": "",
"type": "bool"
}
],
"payable": false,
"stateMutability": "nonpayable",
"type": "function"
},
{
"constant": false,
"inputs": [
{
"name": "spender",
"type": "address"
},
{
"name": "subtractedValue",
"type": "uint256"
}
],
"name": "decreaseAllowance",
"outputs": [
{
"name": "",
"type": "bool"
}
],
"payable": false,
"stateMutability": "nonpayable",
"type": "function"
}
]
function transact(address, amount, token) {
  // send token
  var myContract = web3.eth.contract(abi);
  var contract_data = myContract.at(token);
  var payAmount = web3.toHex(web3.toWei(amount, 'gwei'));
  contract_data.transfer(address, payAmount, { from: web3.eth.accounts[0] }, function (err, result) {
    if (!err) {
      console.log(result);
    }
  });
}
@Andrew-Pohl suggested that an overly long nodes.json file may be the cause.
Steps:
Result:
Still stuck pending.
Expected result:
When paying enough gas while already enqueued, the transaction should go through (it should not remain stuck pending).
Simply handing out Docker containers is not in the spirit of open source or decentralization. These things need to be posted publicly so people can inspect them for themselves.
If the mainnet RPC is not working or is incorrect, the quickstart exits silently when trying to grab the ETH block. This needs to be made more verbose.
Had some great results from my recent high-load tests. One thing I did notice was that the max gas per block of 10M limits the total number of transactions to 472, capping the max TPS at ~90. Can we increase this to, say, 100M to allow for higher transactions per second? I believe the network can support it, and it would make for a great marketing claim of 500+ TPS.
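A back-of-envelope check of these figures, assuming the 21,000-gas minimum for a plain transfer and Fuse's 5-second block time:

```javascript
// Rough capacity math: a plain transfer costs 21,000 gas, so a 10M gas
// block holds at most ~476 such transactions (the 472 observed leaves a
// little headroom for per-block overhead). Dividing by the 5-second
// block time gives the theoretical TPS ceiling.
const TX_GAS = 21_000;     // minimum gas for a plain value transfer
const BLOCK_TIME_S = 5;    // Fuse block time in seconds

function maxTps(blockGasLimit) {
  return Math.floor(blockGasLimit / TX_GAS) / BLOCK_TIME_S;
}

const tpsAt10M = maxTps(10_000_000);   // ~95 TPS, matching the ~90 observed
const tpsAt100M = maxTps(100_000_000); // ~950 TPS, consistent with the 500+ claim
```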
As of EIP-170, mainnet has a maximum contract bytecode size of 24 KB. I suggest we increase this limit.
Here are some interesting reads on the subject:
https://ethereum-magicians.org/t/removing-or-increasing-the-contract-size-limit/3045
Auto generated by fuse_bot based on @APOHLY message https://t.me/c/1410369021/12173
I noticed the same issue. I fixed a bug a few weeks ago where the fusenet container didn't restart on an error; after this fix, when the container restarts, the netstats container loses its connection. Will look into it.
Auto generated by fuse_bot based on @apohly message https://t.me/c/1410369021/9755
Add the ability to set node name from the env file
Eth Gas Station is limiting queries to 1,000 per day, which is preventing validators from doing their mainnet work. We need a new solution that does not throttle queries. It probably wouldn't hurt to slow down our queries either.
There's probably no need for so much logging on each container. Can we set it to one 10 MB file per container?
Auto generated by fuse_bot based on @APOHLY message
The containers shouldn't be created if the name is left as the default. This causes two issues:
• The node_key will be the same if multiple nodes with the default name are created
• netstats confusion
Currently mainnet Blockscout is used to get the mainnet block number to use as the start of syncing. It is currently down, which results in a start block of 0. This will cause insanely high Infura usage.
Will change to use the RPC to get block numbers.
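A rough sketch of the intended change: fetch the start block from the RPC and fail loudly instead of silently falling back to 0. Here `fetchBlockNumber` is a stand-in for a real call such as web3.eth.getBlockNumber:

```javascript
// Sketch: derive the sync start block from the RPC itself rather than
// blockscout, and fail loudly rather than defaulting to 0. A start
// block of 0 would make the bridge scan from genesis and hammer Infura.
function startBlock(fetchBlockNumber) {
  const n = fetchBlockNumber();
  if (!Number.isInteger(n) || n <= 0) {
    throw new Error('could not fetch mainnet block number from RPC');
  }
  return n;
}

// Stubbed RPC response for illustration; in practice this would wrap
// something like web3.eth.getBlockNumber against the mainnet RPC URL.
const block = startBlock(() => 11_000_000);
```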
There is no reason why we should not be able to set the delegator fee even if we aren't active yet.
When the parity container restarts due to an error, the parity wrapper script overwrites the template file used on the first run, so when the script runs again the template is actually the config from the previous run. This results in parity throwing a redefinition error.
This has become a very common EIP now. We should support it.
https://eips.ethereum.org/EIPS/eip-712
When creating a new account and using special characters in the passphrase, the generated JSON file is encrypted with a different password than the one that was provided, and therefore cannot be opened later.
I suspect the issue is in these lines:
ADDRESS=$(yes $PASSWORD | \
  $PERMISSION_PREFIX docker run \
    --interactive --rm \
    --volume $CONFIG_DIR:/config/custom \
    $DOCKER_IMAGE_PARITY \
    --parity-args account new | \
  grep -o "0x.*")
The password is fed to docker/parity through stdin, so some of the characters may not pass through as-is...
Add parity_set to the list of supported APIs in the parity wrapper here:
fuse-network/scripts/parity_wrapper.sh
Line 87 in 35b313d
Some nodes struggle to connect to peers; the reasons for this are unknown. This is seen when:
A workaround is to delete the nodes.json file (found here: ./fusenet/database/FuseNetwork/network/nodes.json) and replace it with a known-good one from a node with many peers.
To stop this issue from happening, it would be beneficial to include a 'good' nodes.json file in the initial node setup script (quickstart.sh).
Actions:
Auto generated by fuse_bot based on @leonprou message https://t.me/c/1410369021/8437
I was running out of storage 🙈
I've been running a validator on 8 GB.
https://studio.fuse.io/view/price says:
"Fees may rise in the future but are capped at $0.01 per transaction"
$0.01
Transaction Cost
What if the Fuse price rises to $100 or even $1000?
Is there a specific reason why mainnet has to have the same amount of tokens as Fusenet? Maybe I have an incomplete picture of how all this works, but it seems to me this is a waste of resources in its current form. I mean, what's wrong with burning and minting ONLY when tokens are sent over a bridge?
Example 1:
Someone sends 10,000 FUSE from Fusenet to mainnet.
Fusenet burns 10,000 tokens; mainnet mints 10,000 tokens.
The sender pays the fee.
Example 2:
Someone sends 10,000 FUSE from mainnet to Fusenet.
Mainnet burns 10,000 tokens; Fusenet mints 10,000 tokens.
The sender pays the fee.
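The two examples above can be captured in a toy model showing that total supply across the two chains is conserved on every transfer (illustrative only, not the actual bridge contracts; the 300M starting supply is a placeholder):

```javascript
// Toy burn-and-mint bridge: tokens only exist on one side at a time,
// so the combined supply across both chains never changes.
const supply = { fusenet: 300_000_000, mainnet: 0 };

function bridgeTransfer(from, to, amount) {
  if (supply[from] < amount) throw new Error('insufficient supply to burn');
  supply[from] -= amount; // burn on the source chain
  supply[to] += amount;   // mint the same amount on the destination chain
}

// Example 1 from above: 10,000 FUSE moves from Fusenet to mainnet.
bridgeTransfer('fusenet', 'mainnet', 10_000);
const combined = supply.fusenet + supply.mainnet; // unchanged: 300M
```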
Dear @LiorRabin ,
I had an error running the quickstart:
Download oracle docker-compose.yml
--2019-11-28 10:01:41-- https://raw.githubusercontent.com/fuseio/bridge-oracle/master/docker-compose.keystore.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.112.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5750 (5.6K) [text/plain]
Saving to: ‘docker-compose.yml’
docker-compose.yml 100%[==============================>] 5.62K --.-KB/s in 0.01s
2019-11-28 10:01:46 (407 KB/s) - ‘docker-compose.yml’ saved [5750/5750]
Please insert a password.
The password will be used to encrypt your private key. The password will additionally be stored in plaintext in /home/pi/fusenet/config/pass.pwd, so that you do not have to enter it again.
Password:
Password (again):
Generate a new account...
standard_init_linux.go:211: exec user process caused "exec format error"
read unix @->/var/run/docker.sock: read: connection reset by peer
I also noticed that I need an API key in the .env file; how should I proceed to get one?
FOREIGN_RPC_URL=https://mainnet.infura.io/v3/<YOUR_API_KEY>
We want to track certain contracts on Fuse network through Subgraphs.
For ex:
https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Overview
The objective is that it should provide the TVL information for contracts created by the fuse cash account factory.
Rob let me know that the quickstart was giving errors on the hugepage call I put in to fix Redis warnings. It looks like the package used to configure THP has been renamed in 20.04.
When I get a chance I'll fix this by extracting the OS version from my setPlatform method.
Hello, how can I get my node key?
Compiler versions jump from 5.16 to 6.1, while most OpenZeppelin contracts want 5.5.
In 6+ the format of the pragma statement changed from ^[version] to >=[version], so they will not compile.
How about at least adding 5.8?
Auto generated by fuse_bot based on @apohly message https://t.me/c/1410369021/8678
Currently the bridge DBC gets written a level above where the quickstart is stored, so if you have multiple nodes it's easy for them to accidentally use the same bridge data folder and "fight" each other. This causes a number of issues, mostly with syncing and finding peers. I propose we move the bridge data to be stored on the same level as the quickstart to avoid this happening.
There is no ability for node operators (validators) to return/remove tokens delegated to their node by third parties.
For example - using 100k stake requirement.
Validator stakes 70k
Delegator stakes 30k
Validator Fee is 10%
After some time the validator wants to put in 30k tokens to have 100% of the node. They have to ask the delegator to remove their delegated tokens first, before staking their 30k tokens.
This may be possible where the validator knows the delegator.
However, in a fully decentralised network, validators will not know their delegators and will be unable to remove delegated tokens from their nodes. They will therefore be unable to manage their nodes fully.
This may not be a significant issue as far as validator rewards go, as the validator could increase their fee to 100%, and people will usually remove their tokens by choice.
However, other wallets may have been forgotten about, passwords lost, etc. These would never be removed.
It also means validators would not be able to comply with legal requests to stop using 'illegal wallets'. The validator could only do so by taking the whole node offline, which resolves the issue, but not very elegantly.
For scalability the validator info needs to be stored on chain at the point of creating a node.
Proposed solution:
Create a new "validator info contract"; this avoids changing the consensus contract. The new contract will store names and contact info (Telegram username, email, website, and so on) on chain. The fuseapp will be modified to take additional input variables for these fields and will write the info to the contract on start. To avoid excessive changes, the sites which currently fetch validator data (staking) can continue to function in the same way (by calling the bot API endpoints), but the bot will look for changes in the validator info periodically and update its databases accordingly.
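The proposed record could look roughly like this; field names are illustrative, not the actual contract interface:

```javascript
// Hypothetical shape of the on-chain validator info: one record per
// validator address, written by the fuseapp on start and polled
// periodically by the bot. Field names are illustrative.
const validatorInfo = new Map();

function setInfo(address, { name, telegram, email, website }) {
  validatorInfo.set(address, { name, telegram, email, website });
}

// What the fuseapp would submit on start, from its extra input variables.
setInfo('0x0000000000000000000000000000000000000001', {
  name: 'example-node',
  telegram: '@example',
  email: 'ops@example.org',
  website: 'https://example.org',
});

const record = validatorInfo.get('0x0000000000000000000000000000000000000001');
```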
Trying to run fuse with quickstart and I am getting this error:
[2019-08-14 23:43:45.826 +0000] INFO (1 on e87579861d1b): runMain
[2019-08-14 23:43:45.826 +0000] INFO (1 on e87579861d1b): emitInitiateChange
[2019-08-14 23:43:47.029 +0000] INFO (1 on e87579861d1b): block #283460 currentCycleEndBlock: 283531 shouldEmitInitiateChange: true
[2019-08-14 23:43:47.029 +0000] INFO (1 on e87579861d1b): 0x0XXXXXXXXXXXXXXXXXXXXXX sending emitInitiateChange transaction
[2019-08-14 23:43:47.704 +0000] ERROR (1 on e87579861d1b): Insufficient funds. The account you tried to send transaction from does not have enough funds. Required 1100000000000000 and got: 0.
When I run npm install, it returns:
7176 error Permission denied (publickey),
7176 error fatal: Could not read from remote repository.7176 error
7176 error Please make sure you have the correct access rights
7176 error and the repository exists.
7177 verbose exit 128
Does it mean I don't have permission to install fuse?
The lowest possible value is 1. There is no reason 0 should not be allowed.
It is currently observed that the end-of-cycle contract call on the Eth network consumes a lot of gas. Could we change the speed parameter for gas price calculations from the ethgasstation API from fast to average to help reduce the transaction cost?
I think it would be neater and easier to manage if the git repos were moved around and we took advantage of subrepos.
The fuse-network repos which the nodes/validators run could be arranged as follows:
FuseNetwork-Master
|> bridge_subrepo
|> parity_subrepo
|> fuseapp_subrepo
|> netstats_subrepo
The master repo can then point to the subrepos meaning fewer clones, easier traceability and easier linkage between the components.
From https://github.com/Andrew-Pohl: gas usage on the executeNewSetSignature call (on the mainnet contract) at the end of cycle seems to be increasing each day. My knowledge of contracts is limited (at best), but looking at transactions three months ago we were using roughly 500k gas; two weeks ago we were at 1.2 million, and yesterday 2.4 million (doubled in two weeks). I noticed the inputs are increasing in length; is this because more validators are joining? With the current high Eth network load it currently costs about $11 a day (and this is with the changes to use the average gas price)!
example from 2 weeks ago - https://etherscan.io/tx/0xe771fc5b66492f30b19dde8a9c9b78e26a64c3dca8e2174db4f9011650016a5f
example from end of the last cycle - https://etherscan.io/tx/0x9042ec8d95614f61c0eaae250c241c572f3fb958752260d97f0ce3eff6c5bd76
I noticed that new nodes either never sync or sit waiting for ages to sync. After a bit of digging, I noticed that parity nodes only allow a max of 64 connections by default.
(https://helpmanual.io/help/parity/ — see --max-pending-peers)
I will fix this later by upping it; 128 should do for now, and we will have to restart the boot nodes.
Auto generated by fuse_bot based on @shaimo1000 message https://t.me/c/1410369021/8683
I thought this issue was fixed, but apparently it was not: when using the script to create a new account and the password contains some special characters, it seems the account is created with the wrong password. The password saved in the .env is incorrect and does not work with the generated JSON file...
There are usually two end-of-cycle txs on mainnet: one to mint Fuse tokens (at ~41,000/day) and one to update the new validator sigs.
Since the change to a ~48-hour cycle time, these mainnet txs are either not being created, or only partially so.
Additionally, validator nodes are seeing a huge spike in eth_calls. I am assuming this is only occurring on the nodes that were chosen to complete the end-of-cycle txs.
There appears to be something wrong with the end-of-cycle process that is causing the txs to fail and the nodes to produce massive amounts of eth_calls to mainnet.
This error has only been seen since the change to 48-hour cycle times.