Remove fast-future #638
Conversation
I was thinking about the exact same thing! 😂
After reading the iterator code again, I think we can further optimize the cache mechanism:
Maybe this will give us a performance gain?
Lemme try to write a benchmark, so we can try those ideas. I'm thinking this benchmark will first insert a small data set (say 10K records), trigger a manual compaction, read once from start to end as a warmup, and then repeatedly read it from start to end.
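The methodology described above can be sketched in plain Node.js. This is only an illustration of the warmup-then-repeated-reads approach, using an in-memory `Map` as a stand-in for the database; `runBenchmark` and the data layout are assumptions, not leveldown's actual benchmark code.

```javascript
// Sketch of the benchmark shape: one warmup pass, then timed full passes.
// The in-memory Map stands in for a compacted leveldown store.
function runBenchmark(store, rounds) {
  const keys = [...store.keys()].sort();

  // Warmup: read once from start to end so caches are primed.
  for (const k of keys) store.get(k);

  // Timed passes: read the whole key range repeatedly.
  const timingsMs = [];
  for (let i = 0; i < rounds; i++) {
    const start = process.hrtime.bigint();
    for (const k of keys) store.get(k);
    timingsMs.push(Number(process.hrtime.bigint() - start) / 1e6);
  }
  return timingsMs;
}

// Small data set (10K records), mirroring the plan above.
const store = new Map();
for (let i = 0; i < 10000; i++) {
  store.set(String(i).padStart(5, '0'), 'value-' + i);
}

const timings = runBenchmark(store, 5);
```

Comparing the median of `timings` across branches would give a rough read on whether the cache change helps.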
Shall we merge this PR so that I can rebase #640?
I wanted to benchmark this first, but seeing as I'm not getting around to that, maybe yeah.
I've benchmarked these branches with an old script: https://gist.github.com/peakji/0fd6c1529951767697480e708806ae33#file-benchmark-js After running the script multiple times, the results show the performance of
@peakji I expected As a next step, I think Level/community#70 is worth exploring.
Actually it is 0.17% faster! 😄
👍 Shall we tag a new release first (maybe including #642)?
5.1.1
We can remove `fast-future`, because the iterator's cache mechanism already prevents event loop starvation (as mentioned by @peakji in #327 (comment)) - unless you have single-byte keys and values. To handle that rare case we can move the internal counter of `fast-future` to the C++ of `leveldown`: it either stops iterating when it hits the `highWaterMark` (in bytes) or when the cache is 1000 elements long.
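The proposed dual stop condition could look something like the sketch below. This is a hypothetical illustration in JavaScript of logic that would live in leveldown's C++; the function and parameter names are my own, and the 1000-entry cap is the figure from the description above.

```javascript
// Hypothetical sketch: decide when to stop filling the iterator's cache.
// Not leveldown's actual internals.
function shouldStopFilling(cacheLength, bytesBuffered, highWaterMark) {
  // Stop once highWaterMark bytes are buffered, OR once the cache holds
  // 1000 entries - the latter guards against single-byte keys/values,
  // which would otherwise take ages to reach the byte limit and starve
  // the event loop.
  return bytesBuffered >= highWaterMark || cacheLength >= 1000;
}
```

The byte limit handles typical workloads, while the entry cap bounds the per-tick work for pathologically small records.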