Separate Block Fetching from Processing #4815
This issue concerns the round-robin initial sync process written here, which requests blocks and then sequentially processes them through the block processing pipeline. As a result, we cannot request more blocks until processing is complete. A more idiomatic Go approach would be to use two goroutines, one for round-robin block download and another for processing, communicating via channels. We can use a buffered channel with a reasonably large buffer (sized according to how much memory we are comfortable using) so that block downloading can fill the buffer while another routine drains items from the channel and processes them. This could significantly reduce the "choppiness" we see in initial sync between block retrieval via p2p and processing. @farazdagi please see #4222 (comment) for guidelines on some basic contributions to tackle this. This could become a multi-PR feature so that we can review it properly and add tests to ensure everything works as expected.
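
A minimal sketch of the proposed producer/consumer split, assuming a hypothetical `Block` type and `fetchBlocks`/`processBlocks` helpers (the real downloader and processing pipeline would replace the placeholders); the buffered channel is what lets downloading run ahead of processing:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Block is a stand-in for the beacon block type used in the real pipeline.
type Block struct {
	Slot uint64
}

// fetchBlocks is a hypothetical stand-in for the round-robin downloader:
// it requests blocks from peers and pushes them into the buffered channel.
func fetchBlocks(ctx context.Context, out chan<- *Block, highestSlot uint64) {
	defer close(out)
	for slot := uint64(0); slot <= highestSlot; slot++ {
		// In the real sync this would be a batched p2p request to a peer.
		blk := &Block{Slot: slot}
		select {
		case out <- blk: // only blocks once the buffer is full
		case <-ctx.Done():
			return
		}
	}
}

// processBlocks drains the channel and runs each block through the
// (placeholder) processing step, independent of download latency.
func processBlocks(ctx context.Context, in <-chan *Block) error {
	for blk := range in {
		// Placeholder for the state transition / block processing pipeline.
		fmt.Printf("processed block at slot %d\n", blk.Slot)
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Buffer size is a memory/throughput trade-off; 64 here is arbitrary.
	queue := make(chan *Block, 64)

	go fetchBlocks(ctx, queue, 10)
	if err := processBlocks(ctx, queue); err != nil {
		fmt.Println("processing stopped:", err)
	}
}
```

The buffer size is the main tuning knob: a larger buffer smooths out slow peers at the cost of holding more undprocessed blocks in memory.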
This is the current design doc for this issue:
note (to myself): when working on block processing it is worth reviewing #3898
Currently, in initial sync, block downloading and processing are tightly coupled. This penalizes our syncing speed, since we must wait for peers to return blocks before we can process them.
What we should be able to do is download blocks and process them concurrently. This would let us process one block after the next seamlessly, without being hampered by the response speeds of our peers.