
stakewars-iii's Introduction

Welcome to Stake Wars: Episode III A New Validator

Stake Wars is a program that helps the community become familiar with what it means to be a NEAR validator, and gives the community an early chance to access the chunk-only producer.

Stake Wars offers rewards that support new members who want to join mainnet as validators, starting from the end of September 2022.

May the Force be with you! Start with the first challenge: https://github.com/near/stakewars-iii/blob/main/challenges/001.md

For Support and important announcements: https://discord.gg/7TercRzRgA

stakewars-iii's People

Contributors

alannetwork, alextothemoon, bowenwang1996, ddealmeida, dongcool, doulos819, dwrx, edwardsvo, gmilescu, heroes-bounty[bot], htafolla, joesixpack, linguists, mina86, mm-near, moshenskydv, nikurt, ok-everstake, paulmattei, platonicsocrates, posvyatokum, ropadalka, sangtn0102, scholtz, secord0, sneerbol, stiavnik, vai00, vm-ev, yantodotid


stakewars-iii's Issues

Shardnet Uptime Scoreboard Needs Resetting

Newer validators show superior uptime because there are significantly fewer epochs to average the uptime over, especially when excluding the three hard forks.

I suggest resetting the dashboard on every hard-fork for fairness.
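The unfairness is easy to see numerically. A minimal sketch (with hypothetical numbers) of how averaging over fewer epochs inflates a newcomer's score:

```python
# Hypothetical numbers, for illustration only: per-epoch uptime is the
# ratio of produced to expected blocks/chunks, and the scoreboard
# averages those ratios over every epoch a validator has been active.

def average_uptime(produced_per_epoch, expected_per_epoch):
    """Mean of the per-epoch uptime ratios."""
    ratios = [p / e for p, e in zip(produced_per_epoch, expected_per_epoch)]
    return sum(ratios) / len(ratios)

# A veteran that struggled through the early epochs and hard forks:
veteran = average_uptime([50, 60, 95, 98, 99], [100] * 5)

# A newcomer measured only over the two most recent, stable epochs:
newcomer = average_uptime([98, 99], [100] * 2)

print(f"veteran:  {veteran:.2%}")   # 80.40% -- dragged down by early epochs
print(f"newcomer: {newcomer:.2%}")  # 98.50% -- despite identical recent performance
```

Resetting the scoreboard at each hard fork would let both validators be scored over the same stable window.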

Cannot destructure property 'result' of 'response' as it is null.

Retrying request to broadcast_tx_commit as it has timed out [
'EwAAAG91ODEyLnNoYXJkbmV0Lm5lYXIAHAteLkoudr+9X/Y3TAKlZ3V18HADs1yYjNPvUTBVFRaIlSec0QAAABwAAABvdTgxMjIuZmFjdG9yeS5zaGFyZG5ldC5uZWFyo0/Vb+fao5IVU6G0Y2SaT6ZRJ/AGZDdb2TdbXAnwEpQBAAAAAgQAAABwaW5nAgAAAHt9ADDvfboCAAAAAAAAAAAAAAAAAAAAAAAAANot4Obs8ZPaItQCgzn136WD3x8dVaAnWWx4X7vJJVzckTtyMSZne12JJGpvHCTKto5Bcwk1mQXI8Fk7m/sQXAo='
]
TypeError: Cannot destructure property 'result' of 'response' as it is null.
at JsonRpcProvider.sendJsonRpc (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:345:17)
at async /home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:121:24
at async Object.exponentialBackoff [as default] (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
at async Account.signAndSendTransactionV2 (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:117:24)
at async scheduleFunctionCall (/home/j/.npm-global/lib/node_modules/near-cli/commands/call.js:57:38)
at async Object.handler (/home/j/.npm-global/lib/node_modules/near-cli/utils/exit-on-error.js:52:9) {
context: ErrorContext {
transactionHash: '7w6EutK1a6gYQGR3ff42dCWmsLjFuVomw6TWHT8aTGZ7'
}
}

Challenge 008: withdraw() failed, but tx can be found in a few epochs

Background:
I was working on Challenge 008 and hit an issue where withdraw() failed with an error, but a few hours later the transaction could be found and
get_accounts() returned an unstaked amount != 0.

Explorer:

Details:
In pasteBin https://pastebin.com/Kc8bvF5u

pool was in proposals, but disappeared completely in new epoch

Hi,
twice my pool appeared in the proposals, but it completely disappeared in the new epoch.
I don't know why. Can someone help, please?

My poolid: stefnet.factory.shardnet.near

I have a balance on my account:
$ near view stefnet.factory.shardnet.near get_account_total_balance '{"account_id": "nowackis.shardnet.near"}' --node_url=http://localhost:3030
View call: stefnet.factory.shardnet.near.get_account_total_balance({"account_id": "nowackis.shardnet.near"})
'300052058689955906382958305'

Challenge 6 - pings

I wonder what the deliverable is for, for example, Challenge 6: the transactions on the network or the log file?

By the way, should we turn off the cron jobs? I think the RPC is down because there is effectively a DDoS of 100-200 servers each trying to send transactions every 5 minutes with 10 retries.

What is the exact meaning of `known_producers` returned by the RPC `network_info` call

By making an RPC call with network_info, I can get my peers and something called known_producers.

Call and parsing

http post http://127.0.0.1:3030 jsonrpc=2.0 method=network_info params:='[]' id=dontcare | node parse-network-info.js

If I parse the resulting data with my script, I can find which of my peers are NOT in the known_producers set:

All peers
| Key | IPAddr | Pool |
| ed25519:7y58am88xR2yV7jtSwXgoBG32w9Jgha7bWnebLLPcGwt | 46.101.209.157:24567 | 
| ed25519:4158mxL6ZAnzWfAMGNstu8JHRRFfnLJvg5nBhoqJ92mv | 35.245.107.6:24567 | 
| ed25519:HvfoXtVw6cjyUzFZy7R3dRu6w8DZifE9HobrXPbkoA8W | 159.223.140.232:24567 | 
| ed25519:DKdeJPTp5tVb4F1t5kGkCRZuJ1a3bCABbPVBnCmS8f1a | 38.242.230.58:24567 | akajul.factory.shardnet.near
| ed25519:DfkqfKBkk89YXUQsJMJ1gKAEPZGiqKgXkJ6PNnoSCoaS | 149.102.157.75:24567 | 
| ed25519:8x8fn4VdcAsRr5j6bTbUYMcWHyP5bzswo1ctfycboXss | 149.102.137.74:24567 | 
| ed25519:28FzdG7y49Rp5w7zdgNXiqBkD7t2d1kjD5dA9uxDNZ5K | 104.200.20.46:24567 | ldavid2046.factory.shardnet.near
| ed25519:H4RBvrStUAioXBjCWWLsfH3zW5H5RFnKLyxGUhgaVYZT | 185.229.119.254:24567 | 
| ed25519:FkLRjt35CE1uufsPoj2PqA6eZDiCZUYwQdAv2dVn8Skc | 209.126.80.249:24567 | 
| ed25519:BkCJc1a6DbKgjkFG9HTsTMk9efMFx2s7k7QNVXkT1n9C | 49.12.240.110:24567 | gladkyi.factory.shardnet.near
| ed25519:F816uJp8735jYUSv2nWDmX9JyMUmzpZYB2z45Btg5orm | 5.9.56.59:24567 | 
| ed25519:ZKknQuQr5qsvNLRQ4qorZhN3drvojxag1puE2Z32rYk | 142.93.59.46:24567 | 
| ed25519:6QqAweBFJhsA6cqr54VfDSJ1vPC3Vq1jq1kT2o3XToox | 45.140.185.206:24567 | emperor_roman.factory.shardnet.near
| ed25519:9zo9c6vDbRRi4Npf53ifKWiwSyELVwwf4DyExZLULdHz | 86.48.5.204:24567 | 
| ed25519:2JGv4yF5GYHX4PKqzRhdwSqzH7fuqJG4H9v8wALp4PEj | 206.81.13.131:24567 | 
| ed25519:GEZ5jz4yCSqHTqRCSxKcCZA68hSGG9bVx8EwcU1CMD3Z | 194.5.152.23:24567 | 
| ed25519:73xFCJ8WzN6PBAG5tufnjjoEPoFNzD9zS8RfyRJdcDLF | 109.107.179.135:24567 | 
| ed25519:CQZFJuRDPLcXLGrzexVH2L4Usv1iwhJiakTawuVf9tQ5 | 65.108.220.111:24567 | enka.factory.shardnet.near
| ed25519:HHeEf5hUD7yYu7s5RFm2p3rAmiL53cuZYK6ADBp3CTNK | 209.126.1.192:24567 | 
| ed25519:A8YhdNaTJKAiYYXtCZcm6QpURzkbDa2U9ofF7Rez7WAp | 45.83.94.11:24567 | mynear.factory.shardnet.near
| ed25519:2wUgvBxxRrgtNftJyb3GVbxoeGV8sLi95fppSoKsRaDU | 213.133.99.24:24567 | plap.factory.shardnet.near
| ed25519:Hmvu6XmWJmK6fntJgUiTAsZqxuoToEyrTj1usMm3Y9pt | 194.163.130.73:24567 | david3073.factory.shardnet.near
| ed25519:5eBQCACCWLy4JrkUXhGWma3DmbQZbhCVS9QxXBUx4q5c | 154.26.129.220:24567 | 1lvpool.factory.shardnet.near
| ed25519:JBdbTt2D6cTYwF3KqYHDsKwF3FX2quRZoYr9XYXKVJvd | 109.107.183.30:24567 | purple_pool.factory.shardnet.near
| ed25519:GcgbFBVinVSmgWKvttthLpjjq6neSDrMuiFvCh3n5GRr | 88.198.66.148:24567 | gateomega_pool.factory.shardnet.near
| ed25519:4n7gzmoQMmbMDmzTR3bqyPEcBUmjh7NFZnzKgcAsDm2M | 65.108.43.51:24567 | mus.factory.shardnet.near
| ed25519:C6c5ugmpq7MMxFViWmokSECaqVyN2NwTfmJUF779S6va | 86.48.5.195:24567 | 
| ed25519:Czq9Lp5n2qPdS3fdBBaBRDGqm9MdDzuwxZvZdpHVQK7T | 149.102.158.166:24567 | 
| ed25519:Kjr4EZZjCcGTEFsK8t2f3Gt5y1W1Zc9rCePrKkGvJ5x | 161.97.94.252:24567 | 
| ed25519:5wgBmXDoZJwgYNxyVckJexibFT2FjdsWK2HcGtdtvNto | 178.210.209.31:24567 | 
| ed25519:Hgscg51nPkbRqF4Dg9XzQn3ubkZEM4CDY5skYfuCwF4X | 157.245.140.41:24567 | drags.factory.shardnet.near

Producers
| Key | IPAddr | Pool |
| ed25519:DKdeJPTp5tVb4F1t5kGkCRZuJ1a3bCABbPVBnCmS8f1a | 38.242.230.58:24567 | akajul.factory.shardnet.near
| ed25519:28FzdG7y49Rp5w7zdgNXiqBkD7t2d1kjD5dA9uxDNZ5K | 104.200.20.46:24567 | ldavid2046.factory.shardnet.near
| ed25519:BkCJc1a6DbKgjkFG9HTsTMk9efMFx2s7k7QNVXkT1n9C | 49.12.240.110:24567 | gladkyi.factory.shardnet.near
| ed25519:6QqAweBFJhsA6cqr54VfDSJ1vPC3Vq1jq1kT2o3XToox | 45.140.185.206:24567 | emperor_roman.factory.shardnet.near
| ed25519:CQZFJuRDPLcXLGrzexVH2L4Usv1iwhJiakTawuVf9tQ5 | 65.108.220.111:24567 | enka.factory.shardnet.near
| ed25519:A8YhdNaTJKAiYYXtCZcm6QpURzkbDa2U9ofF7Rez7WAp | 45.83.94.11:24567 | mynear.factory.shardnet.near
| ed25519:2wUgvBxxRrgtNftJyb3GVbxoeGV8sLi95fppSoKsRaDU | 213.133.99.24:24567 | plap.factory.shardnet.near
| ed25519:Hmvu6XmWJmK6fntJgUiTAsZqxuoToEyrTj1usMm3Y9pt | 194.163.130.73:24567 | david3073.factory.shardnet.near
| ed25519:5eBQCACCWLy4JrkUXhGWma3DmbQZbhCVS9QxXBUx4q5c | 154.26.129.220:24567 | 1lvpool.factory.shardnet.near
| ed25519:JBdbTt2D6cTYwF3KqYHDsKwF3FX2quRZoYr9XYXKVJvd | 109.107.183.30:24567 | purple_pool.factory.shardnet.near
| ed25519:GcgbFBVinVSmgWKvttthLpjjq6neSDrMuiFvCh3n5GRr | 88.198.66.148:24567 | gateomega_pool.factory.shardnet.near
| ed25519:4n7gzmoQMmbMDmzTR3bqyPEcBUmjh7NFZnzKgcAsDm2M | 65.108.43.51:24567 | mus.factory.shardnet.near
| ed25519:Hgscg51nPkbRqF4Dg9XzQn3ubkZEM4CDY5skYfuCwF4X | 157.245.140.41:24567 | drags.factory.shardnet.near

Non producers (don't have a Pool ID)
| Key | IPAddr | Pool |
| ed25519:7y58am88xR2yV7jtSwXgoBG32w9Jgha7bWnebLLPcGwt | 46.101.209.157:24567 | 
| ed25519:4158mxL6ZAnzWfAMGNstu8JHRRFfnLJvg5nBhoqJ92mv | 35.245.107.6:24567 | 
| ed25519:HvfoXtVw6cjyUzFZy7R3dRu6w8DZifE9HobrXPbkoA8W | 159.223.140.232:24567 | 
| ed25519:DfkqfKBkk89YXUQsJMJ1gKAEPZGiqKgXkJ6PNnoSCoaS | 149.102.157.75:24567 | 
| ed25519:8x8fn4VdcAsRr5j6bTbUYMcWHyP5bzswo1ctfycboXss | 149.102.137.74:24567 | 
| ed25519:H4RBvrStUAioXBjCWWLsfH3zW5H5RFnKLyxGUhgaVYZT | 185.229.119.254:24567 | 
| ed25519:FkLRjt35CE1uufsPoj2PqA6eZDiCZUYwQdAv2dVn8Skc | 209.126.80.249:24567 | 
| ed25519:F816uJp8735jYUSv2nWDmX9JyMUmzpZYB2z45Btg5orm | 5.9.56.59:24567 | 
| ed25519:ZKknQuQr5qsvNLRQ4qorZhN3drvojxag1puE2Z32rYk | 142.93.59.46:24567 | 
| ed25519:9zo9c6vDbRRi4Npf53ifKWiwSyELVwwf4DyExZLULdHz | 86.48.5.204:24567 | 
| ed25519:2JGv4yF5GYHX4PKqzRhdwSqzH7fuqJG4H9v8wALp4PEj | 206.81.13.131:24567 | 
| ed25519:GEZ5jz4yCSqHTqRCSxKcCZA68hSGG9bVx8EwcU1CMD3Z | 194.5.152.23:24567 | 
| ed25519:73xFCJ8WzN6PBAG5tufnjjoEPoFNzD9zS8RfyRJdcDLF | 109.107.179.135:24567 | 
| ed25519:HHeEf5hUD7yYu7s5RFm2p3rAmiL53cuZYK6ADBp3CTNK | 209.126.1.192:24567 | 
| ed25519:C6c5ugmpq7MMxFViWmokSECaqVyN2NwTfmJUF779S6va | 86.48.5.195:24567 | 
| ed25519:Czq9Lp5n2qPdS3fdBBaBRDGqm9MdDzuwxZvZdpHVQK7T | 149.102.158.166:24567 | 
| ed25519:Kjr4EZZjCcGTEFsK8t2f3Gt5y1W1Zc9rCePrKkGvJ5x | 161.97.94.252:24567 | 
| ed25519:5wgBmXDoZJwgYNxyVckJexibFT2FjdsWK2HcGtdtvNto | 178.210.209.31:24567 | 

Questions:

  • Does that mean those nodes are not producing any chunks and are interfering with the known producer peers?
  • Should/could I blacklist them so they don't generate extra traffic to my node?
  • What is the blacklist format? See issue #83
  • Is it better to use a "whitelist" of known producers rather than a blacklist?
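For what it's worth, the peers-vs-producers split above can be reproduced without the Node script. A sketch in Python, where the field names (active_peers with id/addr, known_producers with peer_id) are assumptions based on the shardnet-era network_info response shape:

```python
# Partition the active peers from a network_info response into known
# producers and the rest, by matching each peer's id against the
# peer_id values in known_producers. Field names are assumptions.

def split_peers(network_info):
    producer_ids = {p["peer_id"] for p in network_info["known_producers"]}
    producers, others = [], []
    for peer in network_info["active_peers"]:
        (producers if peer["id"] in producer_ids else others).append(peer)
    return producers, others

# Tiny sample shaped like the tables above:
sample = {
    "known_producers": [
        {"peer_id": "ed25519:DKdeJPTp5tVb4F1t5kGkCRZuJ1a3bCABbPVBnCmS8f1a",
         "account_id": "akajul.factory.shardnet.near"},
    ],
    "active_peers": [
        {"id": "ed25519:DKdeJPTp5tVb4F1t5kGkCRZuJ1a3bCABbPVBnCmS8f1a",
         "addr": "38.242.230.58:24567"},
        {"id": "ed25519:7y58am88xR2yV7jtSwXgoBG32w9Jgha7bWnebLLPcGwt",
         "addr": "46.101.209.157:24567"},
    ],
}
producers, others = split_peers(sample)
print(len(producers), len(others))  # 1 1
```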

Thanks !

Blockchain sync is stuck

INFO stats: # 1438750 Downloading blocks 4.16% (461236 left; at 1438750) 29 peers ⬇ 4.07 MB/s ⬆ 6.81 MB/s 0.00 bps 0 gas/s CPU: 59%, Mem: 7.81 GB
INFO stats: # 1438750 Downloading blocks 4.16% (461239 left; at 1438750) 30 peers ⬇ 4.00 MB/s ⬆ 12.0 MB/s 0.00 bps 0 gas/s CPU: 56%, Mem: 8.32 GB
INFO stats: # 1438750 Downloading blocks 4.16% (461243 left; at 1438750) 30 peers ⬇ 4.29 MB/s ⬆ 11.9 MB/s 0.00 bps 0 gas/s CPU: 45%, Mem: 7.38 GB
INFO stats: # 1438750 Downloading blocks 4.16% (461246 left; at 1438750) 30 peers ⬇ 4.14 MB/s ⬆ 6.58 MB/s 0.00 bps 0 gas/s CPU: 48%, Mem: 7.69 GB
INFO stats: # 1438750 Downloading blocks 4.16% (461251 left; at 1438750) 30 peers ⬇ 4.47 MB/s ⬆ 11.9 MB/s 0.00 bps 0 gas/s CPU: 66%, Mem: 8.31 GB

After the restart of the neard service:

WARN sync: Block sync: 1438750/1900059 No available archival peers to request block
....

HEAD is detached at 68bfa84ed.
I tried deleting the blockchain data and resyncing from scratch.

Extra `}` char in create_staking_pool call

There is an extra } character in the documented command.
Now:

near call factory.shardnet.near create_staking_pool '{"staking_pool_id": "<pool id>", "owner_id": "<accountId>", "stake_public_key": "<public key>", "reward_fee_fraction": {"numerator": 5, "denominator": 100}, "code_hash":"DD428g9eqLL8fWUxv8QSpVFzyHi1Qd16P8ephYCTmMSZ"}}' --accountId="<accountId>" --amount=30 --gas=300000000000000

Must be:

near call factory.shardnet.near create_staking_pool '{"staking_pool_id": "<pool id>", "owner_id": "<accountId>", "stake_public_key": "<public key>", "reward_fee_fraction": {"numerator": 5, "denominator": 100}, "code_hash":"DD428g9eqLL8fWUxv8QSpVFzyHi1Qd16P8ephYCTmMSZ"}' --accountId="<accountId>" --amount=30 --gas=300000000000000
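A cheap way to catch this class of documentation bug before burning gas is to validate the JSON argument locally first. A small sketch (the argument strings here are shortened stand-ins for the full create_staking_pool arguments):

```python
import json

# Shortened stand-ins for the full argument string; the bad one has the
# same trailing extra '}' as the documented command.
bad = '{"staking_pool_id": "x", "reward_fee_fraction": {"numerator": 5, "denominator": 100}}}'
good = '{"staking_pool_id": "x", "reward_fee_fraction": {"numerator": 5, "denominator": 100}}'

def is_valid_json(s):
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json(bad))   # False -- the extra '}' makes it invalid
print(is_valid_json(good))  # True
```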

Local RPC Timing Out

I'm starting to see this issue crop up on more powerful machines, not just low-end ones.

[screenshot]

Ping error - operation cost

Ping does not go through, failing with a not-enough-balance error, even though I have a balance on my account:

near call stingray.factory.shardnet.near ping '{}' --accountId stingray.shardnet.near --gas=300000000000000
Scheduling a call: stingray.factory.shardnet.near.ping({})
Doing account.functionCall()
ServerError: Sender stingray.shardnet.near does not have enough balance 26.026831465460946155827066 for operation costing 37.854480396494967545665848
    at Object.parseRpcError (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/rpc_errors.js:24:19)
    at /usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:319:44
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.exponentialBackoff [as default] (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
    at async JsonRpcProvider.sendJsonRpc (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:304:26)
    at async /usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:121:24
    at async Object.exponentialBackoff [as default] (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
    at async Account.signAndSendTransactionV2 (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:117:24)
    at async scheduleFunctionCall (/usr/lib/node_modules/near-cli/commands/call.js:57:38)
    at async Object.handler (/usr/lib/node_modules/near-cli/utils/exit-on-error.js:52:9) {
  type: 'NotEnoughBalance',
  context: ErrorContext {
    transactionHash: 'BNiXN3vgcVAHzSBW3mEAukuEHrEF4efYm7FiSwJmNc5r'
  },
  balance: '26026831465460946155827066',
  cost: '37854480396494967545665848',
  signer_id: 'stingray.shardnet.near',
  kind: {
    balance: '26026831465460946155827066',
    cost: '37854480396494967545665848',
    signer_id: 'stingray.shardnet.near'
  }
}
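The balance and cost fields in the error are raw yoctoNEAR (1 NEAR = 10^24 yoctoNEAR), which is how the CLI derives the human-readable 26.02.../37.85... figures. A sketch of the conversion and the shortfall:

```python
from decimal import Decimal

YOCTO_PER_NEAR = Decimal(10) ** 24  # 1 NEAR = 10^24 yoctoNEAR

def yocto_to_near(yocto: str) -> Decimal:
    """Convert a raw yoctoNEAR string to NEAR, exactly."""
    return Decimal(yocto) / YOCTO_PER_NEAR

# The values from the NotEnoughBalance error above:
balance = yocto_to_near('26026831465460946155827066')
cost = yocto_to_near('37854480396494967545665848')

print(f"balance:   {balance:.6f} NEAR")
print(f"cost:      {cost:.6f} NEAR")
print(f"shortfall: {cost - balance:.6f} NEAR")  # top up at least this much before retrying
```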

[screenshot]

Confused challenge 6

The crontab code has been changed to run every 2 hours, but the acceptance criteria still say every 5 minutes. Which one should I follow?

maybe version 2565 ignores the parameter network.public_addrs

# ./neard_2565 --home ./ run
2022-08-12T13:41:32.574918Z INFO neard: version="trunk" build="1.1.0-2565-g68bfa84ed" latest_protocol=100
2022-08-12T13:41:32.575025Z WARN neard: ./config.json: encountered unrecognised field: network.public_addrs
2022-08-12T13:41:38.269611Z INFO db: Created a new RocksDB instance. num_instances=1
2022-08-12T13:41:38.270093Z INFO db: Dropped a RocksDB instance. num_instances=0
2022-08-12T13:41:38.270104Z INFO near: Opening RocksDB database path=./data
2022-08-12T13:41:38.377674Z INFO db: Created a new RocksDB instance. num_instances=1

# ./neard_2580 --home ./ run
2022-08-12T13:42:26.562549Z INFO neard: version="trunk" build="1.1.0-2580-g78ef2f558" latest_protocol=100
2022-08-12T13:42:32.170623Z INFO db: Created a new RocksDB instance. num_instances=1
2022-08-12T13:42:32.171106Z INFO db: Dropped a RocksDB instance. num_instances=0
2022-08-12T13:42:32.171117Z INFO near: Opening RocksDB database path=./data
2022-08-12T13:42:32.268764Z INFO db: Created a new RocksDB instance. num_instances=1

code: 104 - Peer stream error

This error pops up from time to time. At the same time, filtering the logs with `| grep INFO` shows that peers are connected, steadily 20+. An error about sending a message to a mailbox also pops up periodically.
[screenshot]

I still haven't found a solution to this problem

Possible memory leak

When I started the node 2 days ago, memory usage was ~1 GB; now it sits at 3.5 GB. Is there a memory leak in the node software, or is this expected due to the increased number of nodes?

Faulty validators

With debug logs enabled:

logs-from-stakewars-2-nonarchival-deployment-in-stakewars-2-nonarchival-deployment-5bdf88d6c8-b8mrn.log

AccountNotFound.txt

List of faulty validators:

dwrx.factory.shardnet.near 
vkoval.factory.shardnet.near 
burdayl.factory.shardnet.near 
alien.factory.shardnet.near 
fairylovehn127.factory.shardnet.near 
nikitinslava.factory.shardnet.near 
vixello.factory.shardnet.near 
ttimmatti_pool.factory.shardnet.near 
ngobakha.factory.shardnet.near 
space.factory.shardnet.near 
legrev.factory.shardnet.near 
nearmoon01.factory.shardnet.near 
tco_node1.factory.shardnet.near 

The message drop because of reason: AccountNotFound

2022-08-05T10:29:10.293241Z DEBUG network: Drop message account_id=Some(AccountId("scholtz.factory.shardnet.near")) to=AccountId("ngobakha.factory.shardnet.near") reason=AccountNotFound msg=PartialChunkRequest(ChunkHash(`9Sffuo4a4jV6hGutNBJVyFVrv7mxcyhrtVmT7nmJLVt7`), [56])

2022-08-05T10:29:10.293316Z DEBUG network: Failed sending message to=ed25519:gKZFUrc3otdNWGAX92yYdkGEfpYkk7EB1dZXX68eJ6S num_connected_peers=13 Routed(RoutedMessageV2 { msg: RoutedMessage { target: PeerId(ed25519:A3jWX9G2uPgGEH8MTDwfXwDEpKDP2UdGB7oDnnS8ERN3), author: ed25519:CXT8aJfR9dRgeC4tYMvACRgb3upRVAM2BqGcjLSYj9nL, signature: ed25519:452dcce1aXg4YH5jSs3mdF4xFqupYShieczo1hT59TqoUv3b9Km4bUwA4KXoKWrXgjboKQeyqSuHF4dGM6NtYdRY, ttl: 100, body: PartialChunkRequest(ChunkHash(`9Sffuo4a4jV6hGutNBJVyFVrv7mxcyhrtVmT7nmJLVt7`), [56]) }, created_at: Some(OffsetDateTime { utc_datetime: PrimitiveDateTime { date: Date { year: 2022, ordinal: 217 }, time: Time { hour: 10, minute: 29, second: 10, nanosecond: 293287903 } }, offset: UtcOffset { hours: 0, minutes: 0, seconds: 0 } }) })
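Pulling the unroutable pools out of these DEBUG lines can be automated. A sketch, assuming the log format shown above (the regex is a best-effort match against that one sample):

```python
import re

# One of the DEBUG lines shown above (message body truncated).
line = ('2022-08-05T10:29:10.293241Z DEBUG network: Drop message '
        'account_id=Some(AccountId("scholtz.factory.shardnet.near")) '
        'to=AccountId("ngobakha.factory.shardnet.near") '
        'reason=AccountNotFound msg=PartialChunkRequest(...)')

# Extract the drop target and the drop reason from a "Drop message" line.
DROP_RE = re.compile(r'to=AccountId\("([^"]+)"\).*?reason=(\w+)')

m = DROP_RE.search(line)
if m:
    target, reason = m.groups()
    print(f"{target}: {reason}")  # ngobakha.factory.shardnet.near: AccountNotFound
```

Running every log line through this and counting (target, reason) pairs would reproduce the faulty-validators list above.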

Minimum Storage Cost Continuously Increasing

j@vulpecula:~$ ./near-withdrawall.sh
Scheduling a call: ou8122.factory.shardnet.near.withdraw_all()
Doing account.functionCall()
ServerError: The account  wouldn't have enough balance to cover storage, required to have 1343917603077231960294090 yoctoNEAR more
    at Object.parseRpcError (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/rpc_errors.js:24:19)
    at /home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:319:44
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at async Object.exponentialBackoff [as default] (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
    at async JsonRpcProvider.sendJsonRpc (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:304:26)
    at async /home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:121:24
    at async Object.exponentialBackoff [as default] (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
    at async Account.signAndSendTransactionV2 (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:117:24)
    at async scheduleFunctionCall (/home/j/.npm-global/lib/node_modules/near-cli/commands/call.js:57:38)
    at async Object.handler (/home/j/.npm-global/lib/node_modules/near-cli/utils/exit-on-error.js:52:9) {
  type: 'LackBalanceForState',
  context: ErrorContext {
    transactionHash: 'Fg7EiQ6DLhd3WjgiUsghha7mpjAhKNbEcFUDr1WwsyLE'
  },
  account_id: undefined,
  amount: '1343917603077231960294090',
  kind: {
    amount: '1343917603077231960294090',
    signer_id: 'ou812.shardnet.near'
  }
}

Half an hour later:

./near-withdrawall.sh
Scheduling a call: ou8122.factory.shardnet.near.withdraw_all()
Doing account.functionCall()
ServerError: The account  wouldn't have enough balance to cover storage, required to have 1664538455687186599190127 yoctoNEAR more
    at Object.parseRpcError (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/rpc_errors.js:24:19)
    at /home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:319:44
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at async Object.exponentialBackoff [as default] (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
    at async JsonRpcProvider.sendJsonRpc (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/providers/json-rpc-provider.js:304:26)
    at async /home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:121:24
    at async Object.exponentialBackoff [as default] (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/exponential-backoff.js:7:24)
    at async Account.signAndSendTransactionV2 (/home/j/.npm-global/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:117:24)
    at async scheduleFunctionCall (/home/j/.npm-global/lib/node_modules/near-cli/commands/call.js:57:38)
    at async Object.handler (/home/j/.npm-global/lib/node_modules/near-cli/utils/exit-on-error.js:52:9) {
  type: 'LackBalanceForState',
  context: ErrorContext {
    transactionHash: '7GYcwGJUsUZckwywZjtRR2vLvYkXsctrFJjqFxnD6ABj'
  },
  account_id: undefined,
  amount: '1664538455687186599190127',
  kind: {
    amount: '1664538455687186599190127',
    signer_id: 'ou812.shardnet.near'
  }
}
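Comparing the two required amounts confirms the continuous growth; both figures are yoctoNEAR (10^-24 NEAR). A quick check:

```python
from decimal import Decimal

YOCTO_PER_NEAR = Decimal(10) ** 24  # 1 NEAR = 10^24 yoctoNEAR

# The two "required to have ... yoctoNEAR more" amounts from the errors above:
first = Decimal('1343917603077231960294090')   # initial attempt
second = Decimal('1664538455687186599190127')  # half an hour later

growth = (second - first) / YOCTO_PER_NEAR
print(f"required amount grew by {growth:.6f} NEAR in ~30 minutes")  # ~0.32 NEAR
```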

Problem with the crontab

I have set up a cron task, but the ping transaction doesn't show up. I created a file called ping.sh and the path is correct. Can you tell me what I can try?

Node is getting OOM'ed without a valid reason

So I am running a shardnet full node. I've hit this issue twice, each time a few hours after an upgrade: the node uses all the RAM and CPU, then gets OOM-killed. Here are two Grafana graphs, RAM usage:
[screenshot]
and loadavg1 / cores:
[screenshot]
I didn't do anything on the server at that time, so it's unlikely to be something I did; it seems to be either a chain issue or my hardware not being powerful enough.

The logs are almost 100% this (or similar, with the same pattern: <number> <string> in progress for <time>s orphan for <time>s Chunks:(....)):
[screenshot]

I am using a Contabo VPS S to host the full node. I've already ordered an upgrade, but I wonder whether there are other things I've missed that could be causing this. Can you help?

Thanks a lot in advance.

Missing chunk after updating to the latest code

Hello,

I have just updated my node:

sudo systemctl stop neard
cd ~/nearcore
git fetch
git checkout 68bfa84ed1455f891032434d37ccad696e91e4f5
cargo build -p neard --release --features shardnet
sudo systemctl start neard

But I get the error: Missing chunk....

[screenshot]

P.S. My VPS: 4 vCPU, 8 GB RAM, 160 GB SSD, Ubuntu 22.04

Have a bug Login

Hi there! I want to update the staking pool key:

near call pool1.factory.shardnet.near update_staking_key '{"stake_public_key": "ed25519:9cSnQeBpJjnoXSEwxdKqZ9BMb53aKT5mi7VYVY6cFYtD"}' --accountId spirit_animal.shardnet.near

But have an error:

Receipt: CLRQFehae9cEQjoD8jm5kHVmhTQHvf6WxePorvkwPXBm
        Failure [pool1.factory.shardnet.near]: Error: {"index":0,"kind":{"ExecutionError":"Smart contract panicked: panicked at 'assertion failed: `(left == right)`\n  left: `AccountId(\"spirit_animal.shardnet.near\")`,\n right: `AccountId(\"jhnglt.shardnet.near\")`: Can only be called by the owner', staking-farm/src/owner.rs:148:9"}}
ServerTransactionError: {"index":0,"kind":{"ExecutionError":"Smart contract panicked: panicked at 'assertion failed: `(left == right)`\n  left: `AccountId(\"spirit_animal.shardnet.near\")`,\n right: `AccountId(\"jhnglt.shardnet.near\")`: Can only be called by the owner', staking-farm/src/owner.rs:148:9"}}
    at Object.parseResultError (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/utils/rpc_errors.js:31:29)
    at Account.signAndSendTransactionV2 (/usr/lib/node_modules/near-cli/node_modules/near-api-js/lib/account.js:160:36)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async scheduleFunctionCall (/usr/lib/node_modules/near-cli/commands/call.js:57:38)
    at async Object.handler (/usr/lib/node_modules/near-cli/utils/exit-on-error.js:52:9) {
  type: 'FunctionCallError',
  context: undefined,
  index: 0,
  kind: {
    ExecutionError: "Smart contract panicked: panicked at 'assertion failed: `(left == right)`\n" +
      '  left: `AccountId("spirit_animal.shardnet.near")`,\n' +
      ' right: `AccountId("jhnglt.shardnet.near")`: Can only be called by the owner\', staking-farm/src/owner.rs:148:9'
  },
  transaction_outcome: {
    block_hash: '98KvSDUWLv4mTccV1DxdnqujvXu6wAcVaFLv8xU7Zj4H',
    id: '2gpHobQXsXLTFZPfQ6VF127C1wXMeEnRkYWUvxyT92Af',
    outcome: {
      executor_id: 'spirit_animal.shardnet.near',
      gas_burnt: 2428128941862,
      logs: [],
      metadata: [Object],
      receipt_ids: [Array],
      status: [Object],
      tokens_burnt: '2428128941862000000000'
    },
    proof: [ [Object], [Object], [Object], [Object] ]
  }
}

Here I see another account, jhnglt.shardnet.near. I did not register this account.
My pool is:
pool1.factory.shardnet.near
My account/wallet is:
spirit_animal.shardnet.near

Please look at the transaction history of these accounts:
https://explorer.shardnet.near.org/accounts/jhnglt.shardnet.near
https://explorer.shardnet.near.org/accounts/spirit_animal.shardnet.near

My account spirit_animal.shardnet.near has transactions only to pool1.factory.shardnet.near,
but jhnglt.shardnet.near has no transactions and has nothing to do with my pool. How is this possible?

Observations: Node failed to catch up after adding network.public_addrs

So.. my node was working well for the entire day (good uptime, normal CPU usage).

Then I got the announcement to add network.public_addrs to the config, which I did.

ISSUE 1

After restarting the node, I noticed this in the log:

encountered unrecognised field: network.public_addrs

So it looked like the node didn't understand the config. I'm sure there's no typo, as I copied and pasted directly from the instructions. It's the literal string public_addrs.

ISSUE 2

After restarting the node, I expected it would take at most a minute to catch up and continue validating, but I was wrong and/or very unlucky. My node never caught up afterwards:

[screenshot: node-fails-to-catchup]

I tried restarting several times and spent hours waiting in vain. My node is falling further and further behind.

My naive guess is that this was a poor network state, but it could have been something else, as I notice these kinds of logs quite often:

[screenshot: mailbox-closed]

and

[screenshot: connectionreset]

Hope this issue helps shed some light or trigger some useful discussions 🙂

Thanks all!

Node lagging behind & CPU usage skyrockets (even on 68bfa84ed1455f891032434d37ccad696e91e4f5)

Hi all,

For the past days (after upgrading to commit 78ef2f55857d6118047efccf070ae0f7ddb232ea), my node had been running stably (CPU usage stayed below the optimal load). Yes, I'm aware some of us have had issues with that commit and had to roll back to the previous one, but this node was running okay, so I kept it as-is for the past few days.

However, for the past few hours the node has been lagging behind: I notice it starts downloading headers and blocks all over again. During this whole lag-behind period, CPU usage stays well above the optimal load.

Finally, I decided to roll back to the previous commit 68bfa84ed1455f891032434d37ccad696e91e4f5.

I also adjusted the ideal_connections_lo and ideal_connections_hi to 20 & 25 respectively.

However, the situation hasn't gotten any better. Just now, CPU usage skyrocketed to a whopping 10x its optimal load:

[screenshot: stakewars-iii-node-cpu-surge]

My node was kicked out as a result (obviously). At the moment, the only thing I can do is restart the node whenever CPU utilization surges above the optimal load.

Naive observation: whenever the node is downloading headers or blocks, CPU usage surges. So restarting the app occasionally is definitely not a sustainable approach, since it never (or only very slowly) completes syncing. This certainly affects my node's uptime record, so I'll probably not meet Challenge 9's criteria if this keeps up 🙂

PS: this issue was filed for informational purposes. If anyone is interested in further investigation, and need me to provide anything else (e.g. log records), please let me know.

Thank you for your notice, and all the best! 💪

Starting/Restarting Node Kills Peers Due to Network Errors

Can something be done about the endless network errors on node [re]startup? They prevent the node from re-syncing and returning to a validating state quickly, because it has far fewer good peers than before the restart. The delay interferes with block and chunk production and causes misses. Example:

[screenshot]

zsh: command not found: lscpu & grep: invalid option -- P

Hello everyone,

I'm trying to do the 2nd challenge (challenges/002.md) and check whether I have the right hardware:

lscpu | grep -P '(?=.*avx )(?=.*sse4.2 )(?=.*cx16 )(?=.*popcnt )' > /dev/null \
  && echo "Supported" \
  || echo "Not supported"

I get these errors:

grep: invalid option -- P
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
	[-e pattern] [-f file] [--binary-files=value] [--color=when]
	[--context[=num]] [--directories=action] [--label] [--line-buffered]
	[--null] [pattern] [file ...]
zsh: command not found: lscpu

Do you have any ideas about how to solve them?
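grep -P (PCRE) is a GNU extension and lscpu comes from util-linux, so neither exists in stock macOS/zsh. One portable option is checking the flags in Python: on Linux they come from the `flags` line of /proc/cpuinfo, and on an Intel Mac `sysctl machdep.cpu.features` is the rough equivalent. Note that macOS spells the flags differently (e.g. SSE4.2, AVX1.0), so the normalization below is only a sketch:

```python
# Checks the CPU flags Challenge 2 requires. The normalization
# (lowercase, '.' -> '_') is an assumption meant to cope with differing
# flag spellings across platforms; adjust it for your machine's output.

REQUIRED = {"avx", "sse4_2", "cx16", "popcnt"}

def supports_required(flags_line: str) -> bool:
    flags = {f.lower().replace(".", "_") for f in flags_line.split()}
    return REQUIRED <= flags

# Example: an excerpt of a /proc/cpuinfo `flags` line on Linux.
linux_flags = "fpu vme de pse avx sse4_1 sse4_2 cx16 popcnt"
print("Supported" if supports_required(linux_flags) else "Not supported")
```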

No chunks produced

I've been running the node for 2 weeks. I've already updated and followed the latest troubleshooting for nodes not producing chunks. However, my chunks are not being produced (see image below).

Here is my VPS info:
8 vCPU Cores
30 GB RAM
200 GB NVMe
or 800 GB SSD
3 Snapshots
32 TB Traffic

[screenshot: chunk produce]

[screenshot: missing chunk]

Challenge 13 Proof

It is not feasible to complete the proof for Challenge 13 properly with the shardnet builds, because peer_id does not appear in the logs:

INFO stats: Server listening at peer_id=ed25519:Xx1XXX...XXXXxxx addr=0.0.0.0:24567

@ok-everstake @sb-everstake

Can not complete download header

There seems to be an error in commit 68bfa84ed1455f891032434d37ccad696e91e4f5.

The header download never completes.

Wrong chain_id (image below)

[screenshot]

SHARDNET Validators API call with block_height: 2018939 is pointing to a epoch_start_height: 2008940, but this block doesn't exist

While I was writing a Python script for the big-data challenge, I detected a weird response from the RPC server:

Step 1

Get validators method with block_height: 2018939
And save result.epoch_start_height (2008940)

BLOCK_HEIGHT_epoch_start_height=$(curl -s -X POST 'https://rpc.shardnet.near.org' \
-H 'Content-Type: application/json' \
-d '{
  "jsonrpc": "2.0",
  "id": "dontcare",
  "method": "validators",
  "params": [2018939]
}' -s | jq .result.epoch_start_height)

Step 2

Call the block method with that epoch_start_height:

curl -s -X POST 'https://rpc.shardnet.near.org' \
-H 'Content-Type: application/json' \
-d '{
    "jsonrpc": "2.0",
    "id": "dontcare",
    "method": "block",
    "params": {
        "block_id": '$BLOCK_HEIGHT_epoch_start_height'
    }
}' -s | jq

Expected result

The validators method should return 2008939 as epoch_start_height instead of the non-existing block 2008940.
The 2008941 block says:
"prev_height": 2008939

Actual result

The response for 2008940 is:

{
  "jsonrpc": "2.0",
  "error": {
    "name": "HANDLER_ERROR",
    "cause": {
      "info": {},
      "name": "UNKNOWN_BLOCK"
    },
    "code": -32000,
    "message": "Server error",
    "data": "DB Not Found Error: BLOCK HEIGHT: 2008940 \n Cause: Unknown"
  },
  "id": "dontcare"
}
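Until the off-by-one is resolved, a script can detect the UNKNOWN_BLOCK error and retry one height lower. A hedged sketch: the `-1` adjustment simply mirrors the 2008940 → 2008939 observation above, it is not documented RPC behavior:

```shell
# If the block at the reported epoch_start_height is unknown,
# retry with the previous height.
H=2008940
RESP=$(curl -s -X POST 'https://rpc.shardnet.near.org' \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"dontcare","method":"block","params":{"block_id":'"$H"'}}')
if echo "$RESP" | jq -e '.error' > /dev/null; then
  H=$((H - 1))
  echo "Block not found, retrying with height $H"
fi
```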

Syncing never finishes

The issue

The neard service runs and syncs, but never finishes syncing (see attached logs; it gets stuck above 99%).
My contract is up and pinging successfully. It has enough delegated NEAR to be picked as a validator, but I'm getting 0 uptime because the node won't sync.
When I run curl -s http://127.0.0.1:3030/status, the status is still syncing.
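To see how far behind the node actually is, rather than just that it is syncing, the same status endpoint exposes sync_info; a small sketch assuming jq is installed:

```shell
# Show the syncing flag alongside the node's latest block height;
# comparing the height against the explorer shows whether it still moves.
curl -s http://127.0.0.1:3030/status \
  | jq '{syncing: .sync_info.syncing, height: .sync_info.latest_block_height}'
```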

Staking contract - tias.factory.shardnet.near


Troubleshooting

I've tried the following list of suggestions from @noderunner on Discord:

1. Hardware and Internet meet min specs? 
sudo apt install speedtest-cli && speedtest-cli
 
https://www.vpsbenchmarks.com/
2. Firewall port 24567 open in OS and/or host?
3. Compiled nearcore with the recommended commit?
4. NEAR_ENV set to shardnet? 
export NEAR_ENV=shardnet

5. Wallet key and validator key match? 
near view xxx.factory.shardnet.near get_staking_key '{}' && cat ~/.near/validator_key.json | grep public_key

6. Pinging at least once per epoch and shows up in explorer? https://explorer.shardnet.near.org/accounts/xxx.factory.shardnet.near
7. Total pool stake amount is above minimum (50) & seat price?  
near validators next | grep "seat price"
 Need more NEAR? https://discord.com/channels/490367152054992913/1002631777560576040/1003022444409405560
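Step 5 of the checklist above can be automated so the comparison doesn't rely on eyeballing two outputs. A sketch assuming near-cli and jq, with xxx.factory.shardnet.near as the placeholder pool name from the checklist (the tail/tr cleanup of the near view output is an assumption about its formatting):

```shell
# Compare the staking key registered on-chain with the local validator key;
# they must match for the node to be selected as a validator.
POOL=xxx.factory.shardnet.near           # placeholder pool name
ONCHAIN=$(near view "$POOL" get_staking_key '{}' | tail -1 | tr -d "'\"")
LOCAL=$(jq -r .public_key ~/.near/validator_key.json)
if [ "$ONCHAIN" = "$LOCAL" ]; then echo "Keys match"; else echo "Key mismatch"; fi
```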

I've also tried stopping and starting the service, rebooting the machine, deleting data folder and resyncing, none of it helped.

I am running on EC2 with Amazon Linux 2; the machine meets the minimum requirements.

Please let me know if I can provide more information.

Update the command curl

In the 4th challenge, under "Check Blocks Produced",
the curl command needs -r to get the raw data:

curl -r -s -d '{"jsonrpc": "2.0", "method": "validators", "id": "dontcare", "params": [null]}' -H 'Content-Type: application/json' 127.0.0.1:3030 | jq -c '.result.current_validators[] | select(.account_id | contains ("POOL_ID"))'

http 404 link to rules from near.org/stakewars (low prio)

In short: Rules.md was renamed to rules.md, so the original links no longer work.

StakeWars rules source:
https://github.com/near/stakewars-iii/blob/main/rules.md

was renamed from Rules.md to rules.md, so the "Read the Full Rules here" link used (also) on https://near.org/stakewars/ no longer works and returns HTTP 404.

I tried a Google search (the "link" parameter) to figure out which capitalization is used more:
link:github.com/near/stakewars-iii/blob/main/rules.md

but I'm not sure, so the easiest (though possibly confusing) fix would be to keep both capitalization versions on GitHub
(or, better, a symbolic link if possible).

Log DEBUG Reverting to On

I copied .near .near-credentials .near-config and script files from original to a new machine.

Later on, I synced only .near back to the original machine.

Today, I copied .near .near-credentials .near-config and script files from original to another new machine.

Despite both RUST_LOG and --verbose being set to INFO in the service file, DEBUG logging reverts to being on in all of the above cases, massively inflating the system log file.

registration form and rule number 3

Hello, what is meant by rule number 3?
Can you explain rule number 3 below in more detail?🙏🙏🙏

[screenshot: rule number 3]

Also, what should I fill in for this part of the form?
Does it ask for a mainnet or a shardnet ID?

[screenshot: form field]

Please clear my confusion here with your replies and answers.

Thank you 🙏🙏🙏

Parsing error when setting a blacklist in config.json

When I add a blacklist array to config.json and restart the node, I get a parsing error:


"blacklist": [
"ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567","ed25519:[email protected]:24567"
],

Using a string also gives a parsing error:

"blacklist": 
"ed25519:[email protected]:24567,ed25519:[email protected]:24567"
,

Question:

  • What is the right format for the blacklist?
  • Is there a limit to the number of nodes that can be blacklisted?
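The array-of-strings form shown first appears to be the intended shape, with entries like "ed25519:&lt;peer_id&gt;@&lt;host&gt;:&lt;port&gt;" (the entries above were mangled by the page's email obfuscation, so the literal text shown would not parse). A frequent cause of this kind of parse error is a trailing comma or stray quote, which jq can catch before restarting the node; a small sketch:

```shell
# Validate config.json as strict JSON before restarting neard; jq fails
# loudly on trailing commas and similar syntax errors.
jq empty ~/.near/config.json \
  && echo "config.json is valid JSON" \
  || echo "config.json has a JSON syntax error"
```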

Thanks!
