eth/protocols/snap: generate storage trie from full dirty snap data #22668
Conversation
This looks correct to me. I guess we still can't get rid of the notary, since it's used for accounts?
Btw, I'm still working on this PR, so don't merge it yet. I've found an issue that pushed the wrong keys into the db, yet it still managed to behave correctly (I had an extra short node on top XD). I've also implemented the idea for accounts, but I still need to confirm it works. After that, more massaging is needed: currently the hasher ties up the loop thread, causing request timeouts among other things. TL;DR: it's workable, but there's still work to be done.
Force-pushed from 0c18c4c to 6574559.
eth/protocols/snap/sync.go (outdated diff)

	Next: next,
	Last: last,
	root: acc.Root,
Wouldn't it be better to use next from res.hashes[i] (or wherever the last returned key is?) instead of fetching the storage trie from 0000?
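For illustration, a minimal sketch of what that suggestion amounts to, assuming the last delivered slot hash is at hand. The nextOrigin helper below is hypothetical, not a function from the geth codebase: the idea is simply to resume the next range request at lastDelivered + 1 instead of restarting at 0x00…00.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
)

// nextOrigin returns lastDelivered + 1, the first hash not covered by the
// previous response. The second return is false once the keyspace is exhausted.
func nextOrigin(lastDelivered common.Hash) (common.Hash, bool) {
	next := new(big.Int).Add(lastDelivered.Big(), common.Big1)
	if next.BitLen() > 256 { // lastDelivered was 0xff..ff, nothing left to fetch
		return common.Hash{}, false
	}
	return common.BigToHash(next), true
}

func main() {
	// Pretend this was the last slot hash in the previous storage response.
	last := common.HexToHash("0x8a000000000000000000000000000000000000000000000000000000000000ff")
	if origin, ok := nextOrigin(last); ok {
		fmt.Printf("resume storage request at %x\n", origin)
	}
}
```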
LGTM
* eth/protocols/snap: make better use of delivered data
* squashme
* eth/protocols/snap: reduce chunking
* squashme
* eth/protocols/snap: reduce chunking further
* eth/protocols/snap: break out hash range calculations
* eth/protocols/snap: use sort.Search instead of looping
* eth/protocols/snap: prevent crash on storage response with no keys
* eth/protocols/snap: nitpicks all around
* eth/protocols/snap: clear heal need on 1-chunk storage completion
* eth/protocols/snap: fix range chunker, add tests

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
eth/protocols/snap: generate storage trie from full dirty snap data (ethereum#22668)

* eth/protocols/snap: generate storage trie from full dirty snap data
* eth/protocols/snap: get rid of some more dead code
* eth/protocols/snap: less frequent logs, also log during trie generation
* eth/protocols/snap: implement dirty account range stack-hashing
* eth/protocols/snap: don't loop on account trie generation
* eth/protocols/snap: fix account format in trie
* core, eth, ethdb: glue snap packets together, but not chunks
* eth/protocols/snap: print completion log for snap phase
* eth/protocols/snap: extended tests
* eth/protocols/snap: make testcase pass
* eth/protocols/snap: fix account stacktrie commit without defer
* ethdb: fix key counts on reset
* eth/protocols: fix typos
* eth/protocols/snap: make better use of delivered data (#44)
  * eth/protocols/snap: make better use of delivered data
  * squashme
  * eth/protocols/snap: reduce chunking
  * squashme
  * eth/protocols/snap: reduce chunking further
  * eth/protocols/snap: break out hash range calculations
  * eth/protocols/snap: use sort.Search instead of looping
  * eth/protocols/snap: prevent crash on storage response with no keys
  * eth/protocols/snap: nitpicks all around
  * eth/protocols/snap: clear heal need on 1-chunk storage completion
  * eth/protocols/snap: fix range chunker, add tests
  Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* trie: fix test API error
* eth/protocols/snap: fix some further linter issues
* eth/protocols/snap: fix accidental batch reuse

Co-authored-by: Martin Holst Swende <martin@swende.se>
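The "break out hash range calculations" and "use sort.Search instead of looping" commits refer to splitting the 256-bit hash keyspace into equally sized request chunks. A rough sketch of that idea follows; this is not geth's actual hashRange code (which differs in detail), just the underlying arithmetic:

```go
package main

import (
	"fmt"
	"math/big"
	"sort"

	"github.com/ethereum/go-ethereum/common"
)

// chunkStarts splits the 256-bit hash keyspace into n equal chunks and
// returns the first hash of each chunk.
func chunkStarts(n uint64) []common.Hash {
	space := new(big.Int).Lsh(common.Big1, 256) // 2^256 possible hashes
	step := new(big.Int).Div(space, new(big.Int).SetUint64(n))
	starts := make([]common.Hash, 0, n)
	cur := new(big.Int)
	for i := uint64(0); i < n; i++ {
		starts = append(starts, common.BigToHash(cur))
		cur = new(big.Int).Add(cur, step)
	}
	return starts
}

// chunkOf locates the chunk containing h via binary search rather than a
// linear scan, echoing the "use sort.Search instead of looping" commit.
func chunkOf(starts []common.Hash, h common.Hash) int {
	return sort.Search(len(starts), func(i int) bool {
		return starts[i].Big().Cmp(h.Big()) > 0
	}) - 1
}

func main() {
	starts := chunkStarts(4)
	for i, s := range starts {
		fmt.Printf("chunk %d starts at %x\n", i, s)
	}
	h := common.HexToHash("0x8000000000000000000000000000000000000000000000000000000000000000")
	fmt.Println("0x80.. lands in chunk", chunkOf(starts, h))
}
```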
This PR implements an idea from @holiman, where instead of assembling subtries for each storage chunk received from the network, we drop all of them to disk first and then assemble the entire storage trie from the final disk content. The final trie might have faults in it if the pivot moves or sync is interrupted and resumed, but that will be fixed during the healing phase. The goal of this PR is to get rid of all the missing trie nodes on the request/reply chunk boundaries.
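A minimal sketch of the regeneration step, assuming the StackTrie API of the go-ethereum release this PR targets (the constructor and Commit signatures have changed in later versions). The slot data here is hardcoded for illustration; during sync it would come from iterating the flushed snapshot range in key order:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	db := rawdb.NewMemoryDatabase()
	batch := db.NewBatch()

	// Stand-ins for the dirty slot data already dropped to disk.
	slots := []struct{ key, val []byte }{
		{common.HexToHash("0x01").Bytes(), []byte{0xaa}},
		{common.HexToHash("0x02").Bytes(), []byte{0xbb}},
	}
	// The stack trie hashes finished subtries eagerly and writes the
	// resulting nodes straight into the batch, so memory stays flat.
	st := trie.NewStackTrie(batch)
	for _, s := range slots {
		st.Update(s.key, s.val) // keys must be fed in ascending order
	}
	root, err := st.Commit() // finalize and return the storage root
	if err != nil {
		panic(err)
	}
	if err := batch.Write(); err != nil {
		panic(err)
	}
	fmt.Printf("regenerated storage root: %x\n", root)
}
```

Because the stack trie only ever keeps the rightmost path in memory, feeding it the full sorted slot set produces the complete trie with no missing nodes at chunk boundaries, which is exactly what the per-chunk subtrie assembly could not guarantee.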
PR: [sync benchmark chart]
Master: [sync benchmark chart]
This PR seems to perform very well on testnets, but on mainnet, where the sync time is longer, the database ends up thrashed either way. That's fine though: the goal of this PR is not to make things faster (although that's always nice), but rather to permit dynamic packet sizes without making things worse.