
ethdb/pebble: switch to increasing level sizes #30602

Merged · 1 commit · Oct 15, 2024

Conversation

karalabe (Member)

We've been using 2MB database files ever since the LevelDB era. We've occasionally attempted to change this, but compaction-induced db writes always blew up disk IO tenfold or more.

A lot of time has passed since then, however, and nowadays we're using a completely different data scheme (path-based instead of hash-based state), which has much more localized writes. Out of curiosity, I ran a benchmark switching the levels back to exponentially increasing file sizes.
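
For context, this is roughly the kind of change involved: a minimal sketch against Pebble's `Options`/`LevelOptions` API, not the exact diff in this PR; the starting size and level count below are illustrative only.

```go
package main

import (
	"github.com/cockroachdb/pebble"
	"github.com/cockroachdb/pebble/bloom"
)

// levelOptions builds per-level options where the target file size doubles
// with every level, instead of being a flat 2MB everywhere. The concrete
// numbers here are illustrative, not necessarily the values used in the PR.
func levelOptions() []pebble.LevelOptions {
	levels := make([]pebble.LevelOptions, 7)
	size := int64(2 * 1024 * 1024) // level 0 starts at 2MB
	for i := range levels {
		levels[i] = pebble.LevelOptions{
			TargetFileSize: size,
			FilterPolicy:   bloom.FilterPolicy(10),
		}
		size *= 2 // each deeper level targets files twice as large
	}
	return levels
}

func main() {
	opts := &pebble.Options{Levels: levelOptions()}
	db, err := pebble.Open("testdb", opts)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```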

Naturally, the most obvious effect is on the number of files:

  • master branch with 2MB files across all levels has about 160K data files in the datastore.
  • pr branch has about 9.2K data files in the datastore.
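
As a rough back-of-the-envelope check (assuming master's tables really are ~2MB each, and that both syncs store a comparable amount of data): 160K × 2MiB is roughly 313GiB of table data, and spreading a similar volume over ~9.2K files works out to an average file size in the mid-30s of MiB, which is about what you'd expect once the per-level target sizes grow exponentially.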

These results were expected; the more interesting question is how the change affects performance (charts: green == master, yellow == PR).

Full syncing the chain is ever so slightly faster with the PR: approximately 12h saved across a 13-day sync. Not bad, but nothing special either. It could just be differences between the machines, but even if the gain is real, 12h is always welcome.

[Screenshot: full sync time chart, master (green) vs PR (yellow)]

CPU-usage-wise, the leveled database (after some initial data pileup) uses about half a CPU core less, most probably due to less compaction shuffling. This is a surprise, but a welcome one. Probably not hugely relevant, but it's never bad to use fewer resources.

[Screenshot: CPU usage chart, master (green) vs PR (yellow)]

IO wait behaves as expected: with larger files, a compaction has more data to mobilize, so there is more time spent waiting on the disk in general. That said, we never hit disk limits during the entire full sync, so it seems an acceptable compromise.

[Screenshot: IO wait chart, master (green) vs PR (yellow)]

The stat I was most worried about, though, is disk writes. Historically this is what blew up beyond acceptable levels. Pleasantly, the PR's disk write overhead is only about 5% over an entire full sync. That is amazing.

[Screenshot: disk writes chart, master (green) vs PR (yellow)]

All in all, this change seems to have negligible performance implications, but in exchange it reduces the number of database files from 160K to about 10K. My 2c is that reducing the file count is very valuable: on OSes where the file system doesn't handle huge numbers of files gracefully, this could be the difference between very fast and unusably slow.

@karalabe karalabe added this to the 1.14.12 milestone Oct 15, 2024
@karalabe karalabe requested a review from holiman October 15, 2024 13:54
@holiman (Contributor) left a comment:

LGTM
