
Use fold/reduce to avoid locks in shrink accounts collection #34042

Closed

Conversation

@HaoranYi
Contributor

Problem

Motivated by #34011, we can use fold/reduce to avoid locks in shrink accounts collection.

Summary of Changes

Use fold/reduce to avoid locks in shrink accounts collection.

Fixes #
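Not part of the diff itself, but a rough sketch of the technique may help: with rayon, fold/reduce gives each worker thread its own private accumulator, so the shared Mutex (and the contention on it) goes away, and the per-worker results are merged once at the end. The AliveAccount type below is a hypothetical stand-in for the real types in accounts_db.rs.

```rust
use rayon::prelude::*;
use std::sync::Mutex;

// Hypothetical stand-in for the per-account data that shrink collects.
struct AliveAccount(u64);

// Lock-based version: every rayon worker contends on one Mutex.
fn collect_with_lock(keys: &[u64]) -> Vec<AliveAccount> {
    let alive = Mutex::new(Vec::new());
    keys.par_iter().for_each(|key| {
        alive.lock().unwrap().push(AliveAccount(*key));
    });
    alive.into_inner().unwrap()
}

// fold/reduce version: each worker fills a private Vec, and the
// per-worker Vecs are merged once at the end, with no lock at all.
fn collect_with_fold(keys: &[u64]) -> Vec<AliveAccount> {
    keys.par_iter()
        .fold(Vec::new, |mut acc, key| {
            acc.push(AliveAccount(*key));
            acc
        })
        .reduce(Vec::new, |mut a, mut b| {
            a.append(&mut b);
            a
        })
}
```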

@HaoranYi force-pushed the nov_11_par_map_shrink_collect branch 3 times, most recently from e89d2dd to 1edcd31, on November 13, 2023 at 21:52

codecov bot commented Nov 13, 2023

Codecov Report

Merging #34042 (54e6720) into master (a25eae4) will increase coverage by 0.0%.
Report is 3 commits behind head on master.
The diff coverage is 96.5%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #34042   +/-   ##
=======================================
  Coverage    81.9%    81.9%           
=======================================
  Files         811      811           
  Lines      219569   219573    +4     
=======================================
+ Hits       179901   179920   +19     
+ Misses      39668    39653   -15     

@HaoranYi force-pushed the nov_11_par_map_shrink_collect branch from 1edcd31 to 71ef71a on November 13, 2023 at 23:34
@brooksprumo
Contributor

Love it. Are there perf numbers for before and after?

(Review threads on accounts-db/src/accounts_db.rs, resolved.)
@HaoranYi
Contributor Author

> Love it. Are there perf numbers for before and after?

No. Do you have a good way to bench this change?

@brooksprumo
Contributor

> > Love it. Are there perf numbers for before and after?
>
> No. Do you have a good way to bench this change?

Looks like there's a Measure around the fold+reduce, index_read_elapsed, and it's put into the ShrinkStats. That is then reported as a datapoint named "shrink_stats", with the field also named "index_read_elapsed".
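A minimal sketch of that timing pattern, assuming the solana_measure Measure API and a ShrinkStats cut down to the one field discussed here (the real struct has many more):

```rust
use solana_measure::measure::Measure;
use std::sync::atomic::{AtomicU64, Ordering};

// Simplified stand-in for the real ShrinkStats in accounts_db.rs.
#[derive(Default)]
struct ShrinkStats {
    index_read_elapsed: AtomicU64,
}

fn collect_alive_accounts(stats: &ShrinkStats) {
    let mut index_read_elapsed = Measure::start("index_read_elapsed");
    // ... the fold/reduce over the account index goes here ...
    index_read_elapsed.stop();
    // Accumulated here, then reported under the "shrink_stats" datapoint.
    stats
        .index_read_elapsed
        .fetch_add(index_read_elapsed.as_us(), Ordering::Relaxed);
}
```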

I think if you use the same snapshot and don't skip the initial shrink, that may be a way to compare?

Another option is to spin up two nodes, one with this change and one without. Let them run for a bit, then swap. That should let you compare the two.

@HaoranYi
Contributor Author

I tried ledger-tool verify, but it doesn't seem to hit shrink during the run. I'm going to spin up two nodes to compare instead.

@HaoranYi
Contributor Author

[performance chart]
red: master
blue: this PR

It seems to be slightly better with this PR, but there's also a lot of noise.

@brooksprumo
Contributor

brooksprumo commented Nov 14, 2023

> It seems to be slightly better with this PR, but there's also a lot of noise.

Nice. Now can you swap which node is running the PR vs master? That'll add another set of datapoints to compare.

Ideally, we'd want to see that the node that goes from PR to master drops in perf, and the node that goes from master to PR improves.

@HaoranYi
Contributor Author

[performance chart, nodes swapped]
red: this PR
blue: master

This time blue is still better. It looks like dev7 is always better, so the performance improvement is not conclusive.

@brooksprumo
Contributor

Yeah, it's hard for me to draw conclusions. How do you recommend we proceed?

@HaoranYi
Contributor Author

HaoranYi commented Nov 14, 2023

Since there isn't much improvement, let's close it.

@HaoranYi closed this on Nov 14, 2023