
Create a per keeper cache in the context #2193

Closed
ValarDragon opened this issue Aug 30, 2018 · 12 comments
Labels: S:proposed, T: Performance

Comments

@ValarDragon
Contributor

Summary

Create a cache which can be used in the context, which can be accessed per keeper.

Problem Definition

Benchmarks in #1924 show that amino deserialization is the clear bottleneck in our system. There are a lot of state reads / amino decodes that happen per tx. The example that comes to mind is https://github.com/cosmos/cosmos-sdk/blob/develop/x/stake/keeper/keeper.go#L50. Keeper.GetParams is called very often, but each call does an amino deserialization. We should cache this value in the context, and when updating, do the state write and update the context.

Proposal

Add a map from string to interface{} in the ctx. This will let us avoid the repeated amino decodes (and hashmap hashing is fairly quick).
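
To make the proposal concrete, here is a minimal, self-contained sketch of the idea. It assumes nothing about the SDK's actual Context API; the names (Context, CacheGet, CacheSet, the "stake/params" key) are illustrative only.

```go
package main

import "fmt"

// Minimal sketch of the proposed per-context cache: a map from string keys
// to already-decoded values. Names here are illustrative, not the SDK's API.
type Context struct {
	cache map[string]interface{}
}

func NewContext() Context {
	return Context{cache: make(map[string]interface{})}
}

func (c Context) CacheGet(key string) (interface{}, bool) {
	v, ok := c.cache[key]
	return v, ok
}

func (c Context) CacheSet(key string, value interface{}) {
	c.cache[key] = value
}

type Params struct{ MaxValidators uint16 }

// getParams stands in for Keeper.GetParams: check the cache first and only
// fall back to the expensive store read + amino decode on a miss.
func getParams(ctx Context, decode func() Params) Params {
	if v, ok := ctx.CacheGet("stake/params"); ok {
		return v.(Params)
	}
	p := decode() // store.Get + amino deserialization in the real keeper
	ctx.CacheSet("stake/params", p)
	return p
}

func main() {
	ctx := NewContext()
	decodes := 0
	decode := func() Params { decodes++; return Params{MaxValidators: 100} }

	getParams(ctx, decode)
	getParams(ctx, decode)
	fmt.Println("amino decodes:", decodes) // prints 1, not 2
}
```

As the discussion below points out, a real version also has to handle clearing or discarding this cache when a transaction fails.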

@cwgoes
Contributor

cwgoes commented Aug 30, 2018

Haha I think we both had the same idea - #2194.

@cwgoes
Contributor

cwgoes commented Aug 30, 2018

We can't use the context though, not without adding logic to clear the cache on failed transactions.

@ValarDragon
Contributor Author

ValarDragon commented Aug 31, 2018

I don't think clearing the cache should be hard. We already have logic to reset the context; this should be an easy add-on.

If we want to make any performance improvements pre-launch, benchmarks indicate that we should do this caching, and minimize the number of amino calls in staking's UpdateValidator / governance slashing.

@mossid
Contributor

mossid commented Sep 14, 2018

We can define a new interface InterfaceStore in types/store.go, similar to KVStore but taking interface{} instead of []byte for values. Then we can mount it on the rootMultiStore (it stores CommitStore, so type assertion happens anyway) and copy the logic from cacheKVStore to wrap it. That way we can apply the existing store architecture without making an exception.
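
As a rough sketch of what that might look like (the InterfaceStore name comes from the comment above, but the method set and the toy backing implementation are assumptions, not the SDK's API):

```go
package store

// Sketch of the proposed InterfaceStore: the KVStore shape, but values are
// already-decoded Go values instead of []byte. The method set here is an
// assumption based on the comment above.
type InterfaceStore interface {
	Get(key []byte) interface{}
	Has(key []byte) bool
	Set(key []byte, value interface{})
	Delete(key []byte)
}

// mapInterfaceStore is a toy backing implementation for illustration only;
// the real version would also mirror cacheKVStore's cache-wrapping so writes
// can be discarded on failed transactions.
type mapInterfaceStore struct {
	values map[string]interface{}
}

func NewMapInterfaceStore() InterfaceStore {
	return &mapInterfaceStore{values: make(map[string]interface{})}
}

func (s *mapInterfaceStore) Get(key []byte) interface{} { return s.values[string(key)] }

func (s *mapInterfaceStore) Has(key []byte) bool {
	_, ok := s.values[string(key)]
	return ok
}

func (s *mapInterfaceStore) Set(key []byte, value interface{}) {
	s.values[string(key)] = value
}

func (s *mapInterfaceStore) Delete(key []byte) { delete(s.values, string(key)) }
```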

@ValarDragon
Contributor Author

We don't want this mounted in the root multi store though, right? No information here should be persisted. (No other Gaia implementation should need to be aware of the existence of this)

An InterfaceStore seems more confusing than a cache inside the context, imo.

@mossid
Contributor

mossid commented Sep 14, 2018

Oh, I thought this cache would persist for a block (just like the transientStore). Does the cache live in a transaction?

@ValarDragon
Contributor Author

ValarDragon commented Sep 14, 2018

I didn't really describe it well at the top of the issue. (Nor did I have a fully thought-out idea then, though.)

This was more so for caching things like amino decodings of params. We don't really want this cache to perish every block, though; it'd be better to have a much longer-lived cache. I think we should implement this as a proper LRU. E.g. we have a queue and a map: first check if the key is in the map; if so, acquire a pointer to the relevant queue node and move it to the front of the queue. (This can be done in a way that avoids having to reset the pointer in the hashmap.) When adding a new element to a full cache, pop the last element off of the queue.

The context should have access to this cache, but it should not clear the cache. (The cache handles its own size management)

Another interesting idea, based off @cwgoes' comment, is that we could instead cache the values themselves, not just the decoding step. That does seem a bit harder to implement safely, you're right. The purpose would then be to avoid the store.Get() as well. I'm not sure what a good way to do that would be, but for that particular case we should benchmark KVStore get time vs. hashmap hashing time.
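
A minimal sketch of the queue-plus-map LRU described in this comment, using Go's container/list for the queue (capacity handling and key/value types are illustrative):

```go
package cache

import "container/list"

// entry is stored in the list so eviction can also delete the map key.
type entry struct {
	key   string
	value interface{}
}

// LRU is the queue + map structure described above: the map gives O(1)
// lookup, the list keeps recency order, and the oldest element is evicted
// once capacity is exceeded.
type LRU struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> node in order
}

func NewLRU(capacity int) *LRU {
	return &LRU{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *LRU) Get(key string) (interface{}, bool) {
	elem, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(elem) // the element pointer in the map stays valid
	return elem.Value.(*entry).value, true
}

func (c *LRU) Set(key string, value interface{}) {
	if elem, ok := c.items[key]; ok {
		elem.Value.(*entry).value = value
		c.order.MoveToFront(elem)
		return
	}
	c.items[key] = c.order.PushFront(&entry{key: key, value: value})
	if c.order.Len() > c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}
```

Because the element pointer stored in the map does not change when the node moves, a hit never has to rewrite the map entry, matching the "avoid resetting the pointer in the hashmap" point above.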

@alexanderbez
Contributor

Are we storing generic interfaces in this cache? If so, I don't recommend implementing our own LRU as it's trivial to implement...just use this unless we want our own custom logic/handlers.
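
The library linked in this comment isn't preserved in this transcript; as an assumption, one off-the-shelf option of this kind is hashicorp/golang-lru, used roughly like this:

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	// A 128-entry cache. This implementation is safe for concurrent use
	// because it locks internally, which is exactly the locking cost
	// discussed in the next comment.
	cache, err := lru.New(128)
	if err != nil {
		panic(err)
	}

	cache.Add("stake/params", struct{ MaxValidators uint16 }{MaxValidators: 100})

	if v, ok := cache.Get("stake/params"); ok {
		fmt.Println("cached:", v)
	}
}
```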

@ValarDragon
Contributor Author

ValarDragon commented Sep 14, 2018

I hadn't considered when I wrote the above that there could be check txs and deliver txs running in parallel. We don't want such a cache to use locking (such as in the LRU you linked); that would cause more harm than good. We could instead keep an "LRU cache pool", allocate one on every BeginBlock, and return it on EndBlock. Thus there's no fear of concurrent access, and we don't need any locks. (Even just the lock/unlock process is slow.)

This actually probably would be easier to implement as a store.
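
A rough sketch of the pool idea, reusing the hand-rolled LRU sketch above (the Pool type and the BeginBlock/EndBlock wiring are assumptions, not an existing SDK API):

```go
package cache

// Pool hands out LRU caches so that each concurrent execution path (e.g.
// CheckTx vs. DeliverTx) gets its own cache and no per-access locking is
// needed. Shape and naming are illustrative.
type Pool struct {
	free chan *LRU
}

// NewPool pre-allocates `size` caches, each holding `capacity` entries.
func NewPool(size, capacity int) *Pool {
	p := &Pool{free: make(chan *LRU, size)}
	for i := 0; i < size; i++ {
		p.free <- NewLRU(capacity)
	}
	return p
}

// Acquire would be called around BeginBlock; it blocks if all caches are
// currently in use.
func (p *Pool) Acquire() *LRU { return <-p.free }

// Release would be called around EndBlock, returning the (still warm)
// cache for reuse by the next block.
func (p *Pool) Release(c *LRU) { p.free <- c }
```

The channel does synchronize, but only once per Acquire/Release per block, not on every cache access, which is the point being made above.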

@mossid mentioned this issue Oct 16, 2018
@mossid
Contributor

mossid commented Oct 18, 2018

I confused the purpose of this cache with #1947; I'll start implementing it.

@cwgoes
Contributor

cwgoes commented Jan 16, 2019

I think we can punt this to post-launch.

@ValarDragon
Contributor Author

This is not a perf bottleneck right now
