Update rocksdb.go #218
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #218      +/-   ##
==========================================
+ Coverage   68.51%   68.54%   +0.02%
==========================================
  Files          27       27
  Lines        2128     2130       +2
==========================================
+ Hits         1458     1460       +2
  Misses        595      595
  Partials       75       75
Is there a benchmark to prove the improvement?
In terms of a benchmark, not yet. Do you have any suggestions for benchmarking the performance of mainnet nodes? That is what I would like to benchmark.
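For what it's worth, a rough microbenchmark along these lines could at least make option changes comparable on the same machine, even if a full mainnet sync is the real target. This is only a sketch: it assumes tm-db's NewRocksDB constructor from rocksdb.go and the rocksdb build tag.

```go
//go:build rocksdb

package db

import (
	"encoding/binary"
	"testing"
)

// BenchmarkRocksDBWrites measures raw sequential write throughput. It is not a
// substitute for syncing a mainnet node, but it lets two option sets be compared
// on identical hardware. Run with: go test -tags rocksdb -bench RocksDBWrites
func BenchmarkRocksDBWrites(b *testing.B) {
	db, err := NewRocksDB("bench", b.TempDir()) // assumed constructor from rocksdb.go
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	value := make([]byte, 1024) // 1 KiB values; adjust to the workload you care about
	key := make([]byte, 8)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		binary.BigEndian.PutUint64(key, uint64(i))
		if err := db.Set(key, value); err != nil {
			b.Fatal(err)
		}
	}
}
```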
LGTM!
The default is 1000 files (defined here: https://github.com/tecbot/gorocksdb/blob/master/options.go#L373-L380).
Reading the comments there, it appears that this could in the worst case increase RAM usage from about 2 GB to 8 GB (roughly a 2 MB working set per open file).
This doesn't sound crazy, but it may be indicative of something that should eventually go into a config file.
I imagine it's not really increasing RAM though, and that's just me misunderstanding; at a brief glance I suspect it has to do with LSM restructurings during large writes and compactions.
Do we need to be concerned about expanding existing deployments' RAM footprint by roughly a factor of four if they apply this update? Many deployments depend on container orchestration, so how would an operator know they need to bump their RAM allocation (and possibly their instance sizes)?
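To make that concrete, a config-driven constructor might look roughly like the sketch below. The newTunedRocksDB name and the 4096 value are hypothetical; the sketch assumes gorocksdb's SetMaxOpenFiles setter and tm-db's NewRocksDBWithOptions helper.

```go
//go:build rocksdb

package db

import (
	"runtime"

	"github.com/tecbot/gorocksdb"
)

// newTunedRocksDB (hypothetical) shows how max_open_files could be taken from
// operator configuration instead of being hard-coded. At roughly 2 MB of working
// set per open file, 1000 files is ~2 GB and 4096 files is ~8 GB.
func newTunedRocksDB(name, dir string, maxOpenFiles int) (*RocksDB, error) {
	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	opts.IncreaseParallelism(runtime.NumCPU())
	opts.SetMaxOpenFiles(maxOpenFiles) // e.g. 1000 (the gorocksdb default) or 4096

	return NewRocksDBWithOptions(name, dir, opts)
}
```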
RocksDB usage right now is solely experimental AFAIK, so this seems like something that can be communicated with a warning, given that it isn't actively used yet.
I guess we'll see how it shakes out. 🙂
Today I'm working to make RocksDB an acceptable default, and this seems to help. I got some of this from Terra's work on tm-db.
If desired, I could merge my version of this with more options illustrated for the user, but commented out:
https://github.com/notional-labs/tm-db/blob/rocks-tuning/rocksdb.go
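For reference, the "illustrated for the user, but commented out" style would look roughly like this. It is only a sketch: exampleOptions is a hypothetical helper, the values are placeholders rather than tuned recommendations, and the setters are standard gorocksdb option calls.

```go
//go:build rocksdb

package db

import "github.com/tecbot/gorocksdb"

// exampleOptions (hypothetical) sketches the idea: sensible defaults are applied,
// and heavier knobs are left as comments so operators can opt in deliberately.
func exampleOptions() *gorocksdb.Options {
	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)

	bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
	bbto.SetBlockCache(gorocksdb.NewLRUCache(1 << 30)) // 1 GiB block cache
	bbto.SetFilterPolicy(gorocksdb.NewBloomFilter(10)) // bloom filter, 10 bits per key
	opts.SetBlockBasedTableFactory(bbto)

	// Heavier knobs, deliberately commented out (placeholder values):
	// opts.SetMaxOpenFiles(4096)                     // larger working set, see discussion above
	// opts.IncreaseParallelism(runtime.NumCPU())     // needs the "runtime" import
	// opts.OptimizeLevelStyleCompaction(512 << 20)   // ~512 MiB memtable budget
	// opts.SetCompression(gorocksdb.ZSTDCompression) // smaller on disk, more CPU

	return opts
}
```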
I should also highlight Terra's work here:
master...terra-money:performance
And finally, I should mention that a company that specializes in RocksDB performance enhancements has reached out to me. I bungled the scheduling last week; I reckon they will end up reading this PR, and I hope to get the call scheduled this week.
I figure it's helpful for me to show exactly what I am working on and with: basically, Juno has a longstanding sync issue, so I am trying to make it possible to run a pure-RocksDB node with Osmosis' iavl updates. That way, it'll be as easy as possible for people to set up archive nodes.
I'm then going to go through every version of Juno and apply the changes. So far, Terra's work seems to perform best.
I realize I'm still talking in generalities; I'm not sure yet what the best way to evaluate this is.
Happy to compare notes with anyone.
Furthermore, it's notable that this will immediately apply to any Cosmos SDK 0.44-series chain, including Osmosis and Gaia.