Comments (25)
Great news. Can I ask which commit of erigon you're using?
I am on: 46071798492d88ece8b78e2a26fb16a4b273e7b0
from bsc-archive-snapshot.
Thanks for all the help :-)
Hmmm, can you try to query a contract at that block?
Erigon is still in alpha/beta stage, so I would not be surprised if there are some speed bumps.
It also might have to do with this: ledgerwatch/erigon#3438
I have not updated my version of erigon in a while, so I'll try to find some time to get this updated.
> It also might have to do with this: ledgerwatch/erigon#3438
> I have not updated my version of erigon in a while, so I'll try to find some time to get this updated.
Yep this definitely looks like it.
Is there a way to resync all blocks since 14500000? A full resync would be a pain in the a** :)
> Hmmm, can you try to query a contract at that block?
> Erigon is still in alpha/beta stage, so I would not be surprised if there are some speed bumps.
I’ll try that later today. Away from my terminal right now.
Would you be able to call getTransactionReceipt for transaction 0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725 on your side, to confirm whether the issue is isolated to me?
Also, I am running the latest devel version of erigon on my side. I created the node from your snapshot yesterday. My latest blocks are not showing the error, so my hypothesis is that you could be running a version of erigon that is not patched for the issue and has been compromising blocks since about a month ago.
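For anyone wanting to reproduce the check without a script, the same receipt can be queried with a raw JSON-RPC call. This is a sketch; the endpoint assumes rpcdaemon is listening on the default local address:

```shell
# Query the receipt directly over JSON-RPC (endpoint address is an assumption).
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725"],"id":1}'
```

Watching rpcdaemon's stdout/stderr while this runs should surface the error if the block data is affected.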
It does appear to work for me. I am using the following script:

```js
const ethers = require('ethers');

const provider = new ethers.providers.JsonRpcProvider('http://127.0.0.1:8545');
provider.getTransactionReceipt('0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725').then(data => {
  console.log(data);
});
```
And it returns:

```js
{
  to: '0x10ED43C718714eb63d5aA57B78B54704E256024E',
  from: '0x58acEc799F8bb38A47f6719B4bd2DBd1bA9B3644',
  contractAddress: null,
  transactionIndex: 273,
  gasUsed: BigNumber { _hex: '0x01f5ce', _isBigNumber: true },
  logsBloom: '0x00200a00000000000000000080000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200000000000000000002004000008000000200000000000400000000400010000000000000000000000900000000000000000000000000000240000000010000000000000000000000000000000000000000000240000000000080000004000200000020000000000000000000000020000000000000000000000000000000000000000000002000000000000000000004000100000000000001000000002008080000010000000000000000000000000000000000000000000000000000000000000',
  blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df',
  transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
  logs: [
    {
      transactionIndex: 273,
      blockNumber: 14794507,
      transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
      address: '0x00e1656e45f18ec6747F5a8496Fd39B50b38396D',
      topics: [Array],
      data: '0x00000000000000000000000000000000000000000000000f2dc7d47f15600000',
      logIndex: 867,
      blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df'
    },
    {
      transactionIndex: 273,
      blockNumber: 14794507,
      transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
      address: '0x00e1656e45f18ec6747F5a8496Fd39B50b38396D',
      topics: [Array],
      data: '0xffffffffffffffffffffffffffffffffffffffffffffffee6941766ef4aaab3e',
      logIndex: 868,
      blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df'
    },
    {
      transactionIndex: 273,
      blockNumber: 14794507,
      transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
      address: '0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c',
      topics: [Array],
      data: '0x00000000000000000000000000000000000000000000000011fe039f65046575',
      logIndex: 869,
      blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df'
    },
    {
      transactionIndex: 273,
      blockNumber: 14794507,
      transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
      address: '0x2Eebe0C34da9ba65521E98CBaA7D97496d05f489',
      topics: [Array],
      data: '0x000000000000000000000000000000000000000000007a6b9f2ef2efcb85283a000000000000000000000000000000000000000000000091678546675f43c698',
      logIndex: 870,
      blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df'
    },
    {
      transactionIndex: 273,
      blockNumber: 14794507,
      transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
      address: '0x2Eebe0C34da9ba65521E98CBaA7D97496d05f489',
      topics: [Array],
      data: '0x00000000000000000000000000000000000000000000000f2dc7d47f156000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011fe039f65046575',
      logIndex: 871,
      blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df'
    },
    {
      transactionIndex: 273,
      blockNumber: 14794507,
      transactionHash: '0xc46fdb4d59a2c77fe347a021fdfda51e230fd4f770da3ca093d9dc758c32f725',
      address: '0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c',
      topics: [Array],
      data: '0x00000000000000000000000000000000000000000000000011fe039f65046575',
      logIndex: 872,
      blockHash: '0x7cca3e99c68f0cda13ec1a1a48769cbf420c690f457d23677b0a1d8cb2bd83df'
    }
  ],
  blockNumber: 14794507,
  confirmations: 803886,
  cumulativeGasUsed: BigNumber { _hex: '0x028fcc88', _isBigNumber: true },
  effectiveGasPrice: BigNumber { _hex: '0x012a05f200', _isBigNumber: true },
  status: 1,
  type: 0,
  byzantium: true
}
```
I am currently using this version of erigon: ff4fa5efd35f4bf609d34293b904c1af8d8b5ad8. You can try building that specific version to see if it fixes it.
It is also unlikely, but possible, that your download was somehow corrupted; a fresh download might help.
Hey, thanks so much for giving this a try and replying again here :)
When running this query, did you notice anything in your rpcdaemon stdout/stderr? The error message I mentioned in my first post popped up there, not in the API call response.
Yes, I do see this issue. I will attempt to resync the chain, but this will take some time. This does appear to be fixed on the latest checkout, though.
For what it's worth, I doubt most people will need the transactions it skips. It appears that these are system transactions, which 99% of people probably don't need unless you want to keep super precise logs on everything (i.e. this should not affect most wallet queries).
Ok, thanks for confirming that the issue is not just on my side. To try and avoid a full resync, I'm attempting an unwind to a block prior to when this issue started, using --bad.block=0xc4cd50a70375e98b264eb32c464bde4e31af7d053f7e7cb98dfc1bbbc6af7377. It's been going for about a day now. Will keep you posted on that.
And yes, to your point the impact is probably nonexistent for most people, but I want to use the data for analytics purposes so need the cleanest data possible :-)
Yes, I am already in the process of doing this. It does look like --bad.block will do what we want (from my reading of the code), but it might take some hacking of the code to ensure it doesn't persist the bad block.
I bisected the bad block to 14515943. I'm going to resync from block 14510000 just to be safe.
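The bisection itself is just a binary search over block numbers. Here is a minimal sketch; `isBad` is a hypothetical async predicate you would supply (e.g. query a known transaction in that block and check whether rpcdaemon logs the error), so all names here are illustrative, not from the thread:

```javascript
// Sketch: binary-search for the first bad block, assuming all blocks before
// the bug are good and all blocks at or after it are bad.
// `isBad(blockNumber)` is a hypothetical async predicate supplied by the caller.
async function bisectFirstBad(goodBlock, badBlock, isBad) {
  let lo = goodBlock; // known-good block
  let hi = badBlock;  // known-bad block
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (await isBad(mid)) {
      hi = mid; // the first bad block is at or before mid
    } else {
      lo = mid; // mid is still good, search above it
    }
  }
  return hi; // first block for which isBad() holds
}
```

This takes about log2(range) probes, so narrowing a 100k-block window needs only ~17 queries.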
It looks like it is working, but I won't know for a week or so. What I did was:
1. Update to the latest code in the devel branch (0cac29d1d2021562722dd6dec5d36f170d5a0a84).
2. [Not needed by most users] Stop the cron job that uploads once per day.
3. Set --bad.block=0x081e77856fe4b03f1856bb9bf41c7341f9148395d91d414d76ed4ac3c09abf54 in the docker-compose.yml file.
4. Start docker-compose and wait until it starts processing.
5. Stop docker-compose.
6. Remove the flag from step 3.
7. Start docker-compose again.
I'll post an update in about 2 days.
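For reference, step 3 might look roughly like this in docker-compose.yml. Only the flag itself comes from the comment; the service name and the rest of the command line are assumptions about the setup:

```yaml
services:
  erigon:
    # ...image, volumes, ports as in your existing file...
    command: >
      erigon
      --chain=bsc
      --bad.block=0x081e77856fe4b03f1856bb9bf41c7341f9148395d91d414d76ed4ac3c09abf54
```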
Ha, great that you took the time to narrow down the bad block #. I didn't do that and am unwinding to 14500000. Oh well.
Unwinding has failed 3 times already on my side. Erigon crashes in stage 9 with no error message. The last log lines before it dies look like this:

```
[INFO] [02-27|14:13:57.714] [9/16 IntermediateHashes] ETL [1/2] Extracting from=StorageChangeSet current key=0000000000ed4d75bb4cdb9cbd36b01bd1cbaebf2de08d9173bc095c0000000000000001 alloc="3.2 GiB" sys="6.3 GiB"
[INFO] [02-27|14:14:11.065] Flushed buffer file name=/erigon/data/erigon/etl-temp/erigon-sortable-buf-1347755927 alloc="3.8 GiB" sys="6.3 GiB"
[INFO] [02-27|14:14:15.879] [p2p] GoodPeers eth66=27
[INFO] [02-27|14:16:15.879] [p2p] GoodPeers eth66=28
[INFO] [02-27|14:18:15.880] [p2p] GoodPeers eth66=27
[INFO] [02-27|14:20:15.879] [p2p] GoodPeers eth66=28
```

Seems like it's not going to work on my end.
Question for you: did you keep any snapshot from block ~14510000?
Mine is still going. I'm on a very under-powered AWS instance (I may upgrade it though). It has reached stage 12 so far.
Are you sure it crashed? I read through the code, and most steps don't publish anything in the logs while doing an unwind.
I also added 100 GiB of swap last night, because when reading the code I realized it loads most of what it wants to delete into memory. This morning when I looked at it, it had gone well beyond 50 GiB of memory, and I only have 30 GiB of memory on this instance.
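Adding swap on Linux is only a few commands. A minimal sketch, run as root; the path and size here are assumptions matching the comment above:

```shell
# Create and enable a 100 GiB swap file (path and size are assumptions).
fallocate -l 100G /swapfile
chmod 600 /swapfile    # swap files must not be readable by other users
mkswap /swapfile       # format the file as swap space
swapon /swapfile       # enable it for the running system
```

Note this swap does not persist across reboots unless you also add an entry to /etc/fstab.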
Yep, it definitely crashed on my side; ps aux | grep erigon does not show any signs of life. That happened roughly 24h after the start of unwinding, and there's no error in the logs.
I'm running on an i3en.2xlarge instance (64 GB of memory).
I'm curious to hear if it works out on your side. As a side question, do you by any chance keep older snapshots in storage?
No, sadly I only kept the last 3 days. In hindsight I should have kept a version in archive storage once per month.
Yeah, you probably ran out of memory. I'm spinning up a new is4gen.8xlarge instance right now. That should be able to unwind much faster (it has 4x NVMe drives, and unwinding looks mostly disk-IO bound). By default I use im4gn.2xlarge, as it is just enough to fit everything on disk when compressed and barely able to keep up with the chain.
Gotcha, hope it works out.
In parallel I am going to spin up a new machine and start syncing from scratch, in case that's our only way out.
Update: I was able to get erigon to roll back all the way to block 14509999. I am now syncing upwards. I did discover a bug in erigon though: it would not download new blocks after rolling back that far. I *think* I fixed it by adding some custom hacks to force it to forget about some residual state.
I am hoping that there won't be any more of these unwind state bugs.
Wow. Brilliant. Hope it works out!
It is now at the Execution stage (stage 6). The bad news is it appears it could take up to a month to catch back up. It's able to process about 4-6 blocks per second, and there are quite a few more steps after this stage is done.
Stay tuned. I'll be uploading partial updates once per day in the following folder: s3://public-blockchain-snapshots/bsc/
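If you want to pull one of those partial uploads, something like this should work with the AWS CLI. The bucket path is from the comment above; the destination directory is an assumption, and --no-sign-request lets you read a public bucket without credentials:

```shell
# Mirror the public snapshot folder locally (destination path is arbitrary).
aws s3 sync --no-sign-request s3://public-blockchain-snapshots/bsc/ ./bsc-snapshot/
```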
What a pain. I've started syncing from scratch a few days ago. Also in Execution stage right now. Seems like about 19 days to go. I can share a clean snapshot once that's done.
> What a pain. I've started syncing from scratch a few days ago. Also in Execution stage right now. Seems like about 19 days to go. I can share a clean snapshot once that's done.

I suspect it will take a bit longer than that. IIRC, around October of last year it starts taking an extremely long time per block to run the Execution stage.
I have good news. I have a chain that is back up-to-date, and it appears the issue is resolved.
I am in the process of uploading now. I expect it to be done uploading in about 3-4 hours.
(It turned out the estimator code does not work properly and gives wildly wrong estimates.)
> I have good news. I have a chain that is back up-to-date, and it appears the issue is resolved.
> I am in the process of uploading now. I expect it to be done uploading in about 3-4 hours.
> (It turned out the estimator code does not work properly and gives wildly wrong estimates.)

Great news. Can I ask which commit of erigon you're using?