core/state: parallelise parts of state commit #29681
Conversation
// that the account was destructed and then resurrected in the same block.
// In this case, the node set is shared by both accounts.
lock.Lock()
defer lock.Unlock()
Just a quick note, this locking scheme here will definitely affect performance. You're spawning concurrent work but it'll all have to synchronize after. It would be better to pre-allocate a slice of suitable size to track the output sets (or even have it as a field in StateObject), then perform the merge operation and counter updates after all commits are done.
If I were to wait until all results are in and then merge, then the merge wouldn't proceed concurrently with the disk reads. IMO that would be worse.
I could make a channel and stream the results out and have an outer goroutine listen and merge one by one, but that would just end up being the same as locking and allowing access one by one via the mutex.
Don't really see why it would be different.
The channel way would also work, and the channel could even be buffered. Writes on buffered channels are fast even under contention AFAIK, so could be an idea.
* core/state, internal/workerpool: parallelize parts of state commit
* core, internal: move workerpool into syncx
* core/state: use errgroups, commit accounts concurrently
* core: resurrect detailed commit timers to almost-accuracy
This PR attempts to run bits and pieces of statedb.Commit concurrently with one another. The observation is that even though the code is very light, concurrency does seem to make a difference. Will post some benchmark results. Edit:
Also, I think it's possible to make the account update run concurrently with the storage updates, which might shave even more off. Will need to run that as a benchmark too. Edit: