
Add ReadCache option to LevelDB and RocksDB #255

Closed

Conversation

Qiao-Jin
Contributor

@Qiao-Jin Qiao-Jin commented Jun 1, 2020

This PR adds a "ReadCache" option to LevelDB and RocksDB, which determines whether the database employs a read cache (currently it does not).

This option lets the database cache data across block periods. Currently the snapshot only caches data within a single block period.
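For illustration, the new option might surface in the store plugin's configuration like this. The key name "ReadCache" comes from this PR, but the surrounding structure and path value are assumptions, not copied from the actual plugin config:

```json
{
  "PluginConfiguration": {
    "Path": "Data_LevelDB",
    "ReadCache": false
  }
}
```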

@shargon
Member

shargon commented Jun 1, 2020

Are there any cons?

@Qiao-Jin
Contributor Author

Qiao-Jin commented Jun 1, 2020

Are there any cons?

There should be benefits, for example when some addresses or contracts are called often. Currently the snapshot only serves within one block period; in other words, the same piece of data must be fetched from the database again even if it was fetched in the last block. This PR adds a read cache to the system, which solves that problem.
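The idea can be sketched as a small LRU wrapper over the backing store. This is a minimal Python illustration with hypothetical names, not the actual implementation (which is in C# and wraps LevelDB/RocksDB); the point is that the cache outlives any single block period:

```python
from collections import OrderedDict

class ReadCache:
    """A minimal LRU read cache over a backing key-value store.

    Illustrative only: names and the capacity policy are hypothetical,
    not the actual Neo storage API.
    """

    def __init__(self, store, capacity=1024):
        self.store = store          # backing dict-like store (e.g. LevelDB)
        self.capacity = capacity
        self.cache = OrderedDict()  # key -> value, in LRU order

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as recently used
            return self.cache[key]
        value = self.store[key]             # miss: hit the backing store
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

    def invalidate(self, key):
        # called when a write makes the cached entry stale
        self.cache.pop(key, None)
```

Unlike a per-block snapshot, nothing here discards the cache when a block is persisted, so a contract read in block N can still be served from memory in block N+1.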

shargon previously approved these changes Jun 1, 2020

@erikzhang erikzhang left a comment


This has a big disadvantage. Its cache will duplicate the DataCache, thereby reducing performance.

@Qiao-Jin
Contributor Author

Qiao-Jin commented Jun 2, 2020

This has a big disadvantage. Its cache will duplicate the DataCache, thereby reducing performance.

LevelDB's default LRU cache size is only 8 MB, which I don't think will hurt performance. Furthermore, after a block is persisted the snapshot is flushed, so if the read cache is not enabled we will need to fetch the same data from the hard disk again.
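For a sense of why an 8 MB bound is cheap, here is a pure-Python sketch of a cache bounded by total value size in bytes rather than entry count. This mirrors the eviction policy of a small block cache; it is an illustration, not LevelDB's actual implementation:

```python
from collections import OrderedDict

class BoundedReadCache:
    """LRU cache bounded by total value size in bytes, mirroring the
    behavior of a small block cache (LevelDB defaults to 8 MB).
    Hypothetical sketch, not the actual LevelDB implementation."""

    def __init__(self, max_bytes=8 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> bytes value, in LRU order

    def put(self, key, value):
        if key in self.entries:
            self.used -= len(self.entries.pop(key))
        self.entries[key] = value
        self.used += len(value)
        while self.used > self.max_bytes:  # evict LRU until within bound
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as recently used
            return self.entries[key]
        return None
```

Whatever the workload, memory use never exceeds `max_bytes`, which is why a small default cache cannot meaningfully degrade the rest of the node.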

@erikzhang
Member

Has it been tested? Has performance improved?

@Qiao-Jin
Contributor Author

Qiao-Jin commented Jun 4, 2020

Has it been tested? Has performance improved?

TPS is almost the same, but that is because the test transaction data is uniform. The effect should be better in a real-world environment, where the set of cacheable data is much larger.

Member

@vncoelho vncoelho left a comment


@joeqian10, perhaps if we are not sure about the performance we can set ReadCache=false as the default and merge the PR; once we have more experiments we can change the default value.

@joeqian10
Contributor

@vncoelho I think you are @Qiao-Jin here, hahaha😂

@Qiao-Jin
Contributor Author

Qiao-Jin commented Jun 8, 2020

@joeqian10, perhaps if we are not sure about the performance we can set ReadCache=false as the default and merge the PR; once we have more experiments we can change the default value.

Yes I think so.

@erikzhang
Member

perhaps if we are not sure about the performance we can set ReadCache=false as the default and merge the PR; once we have more experiments we can change the default value.

Or should we test it first, before merging?

@vncoelho
Member

vncoelho commented Jun 8, 2020

I think we should test whether it works, @erikzhang; however, we need a better setup for general performance tests. We have been working on this, and NGD did good experiments in the PR about ConsensusPrepareRequest caching, neo-project/neo#949.

I have some scripts in R that automatically generate nice graphs and run tests like ANOVA for comparisons across multiple axes; I used them all in my thesis and in other work on optimization. We need to create some kind of repository that performs this automatic performance analysis on PRs, where we just pass commits as references and it compares them intensively. That would surely bring us a lot of tranquility in decision making.

@Qiao-Jin Qiao-Jin closed this Dec 25, 2020
5 participants