
Update rocksdb.go #218

Merged: 1 commit merged into master on Feb 15, 2022

Conversation

@faddat faddat (Contributor) commented Feb 15, 2022

Today I'm working to make RocksDB an acceptable default, and this change seems to help. I got some of it from Terra's work on tm-db.

If desired, I could instead contribute my version of this, which illustrates more options for the user but leaves them commented out:

https://github.com/notional-labs/tm-db/blob/rocks-tuning/rocksdb.go
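
For context, here is a minimal sketch of the kind of option tuning being discussed, using the tecbot/gorocksdb bindings that tm-db's rocksdb.go is built on. The specific values (block cache size, bloom filter bits, max open files) are illustrative assumptions, not the exact contents of this PR:

```go
package db

import (
	"runtime"

	"github.com/tecbot/gorocksdb"
)

// newTunedRocksDBOptions is an illustrative sketch of RocksDB tuning for an
// archive-style node; the concrete numbers are assumptions, not this PR's diff.
func newTunedRocksDBOptions() *gorocksdb.Options {
	bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
	// Larger block cache so hot state stays in memory (1 GiB here is illustrative).
	bbto.SetBlockCache(gorocksdb.NewLRUCache(1 << 30))
	// Bloom filters cut disk reads for point lookups (10 bits/key is a common choice).
	bbto.SetFilterPolicy(gorocksdb.NewBloomFilter(10))

	opts := gorocksdb.NewDefaultOptions()
	opts.SetBlockBasedTableFactory(bbto)
	opts.SetCreateIfMissing(true)
	// Use all cores for background flushes and compactions.
	opts.IncreaseParallelism(runtime.NumCPU())
	// Tune level-style compaction around a larger memtable budget (512 MiB here).
	opts.OptimizeLevelStyleCompaction(512 * 1024 * 1024)
	// Raise the open-file limit above the gorocksdb default of 1000 so large
	// archive databases don't thrash the table cache.
	opts.SetMaxOpenFiles(4096)
	return opts
}
```

The open-file limit is the knob most relevant to the discussion below; the rest is typical level-compaction tuning.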

I should also highlight Terra's work here:

master...terra-money:performance

And finally, I should mention that a company that specializes in RocksDB performance enhancements has reached out to me. I bungled the scheduling last week; I reckon they will end up reading this PR, and I hope to get the scheduling right this week.

I figure it's helpful for me to show exactly what I am working on and with: basically, Juno has a longstanding sync issue, so I am trying to make it possible to run a pure-RocksDB node with Osmosis' IAVL updates. This way, it will be as easy as possible for people to set up archive nodes.

I'm then going to go through every version of Juno and apply the changes. So far, Terra's work seems to perform best.

So far I have only been talking in generalities, and I am not sure what the best way to measure this is (a rough sketch of the first option follows the list):

  • blocks per second synced?
  • queries per second against a fully synced archive node without making the node fall behind the tip?
  • something else?
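
For blocks per second synced, something as simple as polling the node's Tendermint RPC works. This sketch assumes the standard /status endpoint on localhost:26657; the 60-second window is an arbitrary choice:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// statusResp holds just the fields of /status that we need.
type statusResp struct {
	Result struct {
		SyncInfo struct {
			LatestBlockHeight string `json:"latest_block_height"`
		} `json:"sync_info"`
	} `json:"result"`
}

// height fetches the node's latest block height from its RPC endpoint.
func height(rpc string) (int64, error) {
	resp, err := http.Get(rpc + "/status")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var s statusResp
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		return 0, err
	}
	return strconv.ParseInt(s.Result.SyncInfo.LatestBlockHeight, 10, 64)
}

func main() {
	const rpc = "http://localhost:26657"
	const window = 60 * time.Second

	start, err := height(rpc)
	if err != nil {
		panic(err)
	}
	time.Sleep(window)
	end, err := height(rpc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("synced %.2f blocks/sec over %s\n",
		float64(end-start)/window.Seconds(), window)
}
```

Running the same measurement before and after the tuning change, on the same hardware and the same snapshot, is what would make the numbers comparable.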

Happy to compare notes with anyone.

Furthermore, it's worth noting that this will immediately apply to any Cosmos SDK 0.44-series chain, including Osmosis and Gaia.

@codecov codecov bot commented Feb 15, 2022

Codecov Report

Merging #218 (e6d403a) into master (9ea19a1) will increase coverage by 0.02%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #218      +/-   ##
==========================================
+ Coverage   68.51%   68.54%   +0.02%     
==========================================
  Files          27       27              
  Lines        2128     2130       +2     
==========================================
+ Hits         1458     1460       +2     
  Misses        595      595              
  Partials       75       75              
Impacted Files | Coverage Δ
rocksdb.go | 72.26% <100.00%> (+0.41%) ⬆️

@tac0turtle tac0turtle (Contributor) left a comment

Is there a benchmark to prove the improvement?

@faddat faddat (Contributor, Author) commented Feb 15, 2022

In terms of a benchmark, not yet.

Do you have any suggestions on benchmarking the performance of mainnet nodes? That is what I would like to benchmark.

@faddat faddat closed this Feb 15, 2022
@faddat faddat reopened this Feb 15, 2022
@ValarDragon ValarDragon (Contributor) left a comment

LGTM!

The default is 1000 open files (defined here: https://github.com/tecbot/gorocksdb/blob/master/options.go#L373-L380).

Reading the comments there, it appears to me that this could, in the worst case, increase RAM usage from about 2 GB to 8 GB (roughly 2 MB of working set per open file).

This doesn't sound crazy, but it may be indicative of something that should eventually go into a config file.

I imagine it's not really increasing RAM though, and that's just me misunderstanding; on a brief glance I suspect it has to do with LSM restructurings on large writes and compactions.
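
If this does end up as a config knob, it could look something like the sketch below. The environment variable name and fallback value are hypothetical, and the 2 MB-per-file figure is just the working-set estimate from the gorocksdb comments linked above:

```go
package db

import (
	"os"
	"strconv"
)

// maxOpenFilesFromEnv is a hypothetical override for the RocksDB open-file
// limit. With roughly 2 MB of index/filter working set per open file, the
// worst-case table-cache memory is about:
//   1000 files * 2 MB ≈ 2 GB (the gorocksdb default)
//   4096 files * 2 MB ≈ 8 GB (the higher limit discussed here)
func maxOpenFilesFromEnv(fallback int) int {
	v := os.Getenv("TMDB_ROCKSDB_MAX_OPEN_FILES") // hypothetical variable name
	if v == "" {
		return fallback
	}
	n, err := strconv.Atoi(v)
	if err != nil || n <= 0 {
		return fallback
	}
	return n
}
```

The result would then be handed to opts.SetMaxOpenFiles when the options are constructed, so operators who know their RAM budget can pick their own limit.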

@creachadair (Contributor) commented:

Do we need to be concerned about expanding existing deployments' RAM footprint by a factor of approximately 4, if they were to patch this update? It seems like a lot of deployments depend on container orchestration. How does the operator know they need to bump up their RAM allocation (and possibly their instance sizes)?

@ValarDragon (Contributor) commented:

RocksDB usage right now is solely experimental, AFAIK. Since it isn't actively used yet, this seems like something that can be communicated with a warning.

@creachadair (Contributor) commented:

I guess we'll see how it shakes out. 🙂
