Create a per keeper cache in the context #2193
Comments
Haha I think we both had the same idea - #2194.
We can't use the context though, not without adding logic to clear the cache on failed transactions.
I don't think clearing the cache should be hard. We already have logic to reset the context, so this should be an easy add-on. If we want to make any performance improvements pre-launch, benchmarks indicate that we should do this caching, and minimize the number of amino calls in staking's UpdateValidator / governance slashing.
We can define a new interface
We don't want this mounted in the root multi store though, right? No information here should be persisted. (No other Gaia implementation should need to be aware that this exists.) An InterfaceStore seems more confusing than a cache inside of the context, imo.
Oh, I thought this cache would persist for a block (just like
I didn't really describe it well at the top of the issue (nor did I have a fully thought-out idea then). This was more for caching things like amino decodings of params. We don't really want this cache to perish every block; instead it'd be better to have a much longer-lived cache.

I think we should implement this as a proper LRU. E.g. we have a queue and a map. First check if the key is in the map; if so, acquire a pointer to the relevant queue node and move it to the front of the queue. (This can be done in a way that avoids having to reset the pointer in the hashmap.) When adding a new element, pop the last element off of the queue. The context should have access to this cache, but it should not clear it. (The cache handles its own size management.) A rough sketch follows below.

Another interesting idea, based off @cwgoes's comment, is that we could cache the values themselves, not just the result of the decoding process. That does seem a bit harder to implement safely, you're right. The purpose of that would be to avoid the store.Get() as well. I'm not sure what a good way to do that would be, but for that particular case we should benchmark kvstore get time vs hashmap hashing time.
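For concreteness, here is a minimal sketch of the queue-plus-map LRU described above, using Go's container/list as the queue. The type and method names are illustrative only, not anything that exists in the SDK:

```go
package cache

import "container/list"

// entry is what the list holds, so eviction can find the map key again.
type entry struct {
	key   string
	value interface{}
}

// LRU is a minimal least-recently-used cache: a doubly linked list ordered by
// recency plus a map from key to list element.
type LRU struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> node in order
}

// NewLRU creates a cache holding at most capacity entries (capacity > 0 assumed).
func NewLRU(capacity int) *LRU {
	return &LRU{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

// Get returns the cached value and marks it as most recently used.
func (c *LRU) Get(key string) (interface{}, bool) {
	elem, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(elem) // the map still points at the same element, no pointer reset needed
	return elem.Value.(*entry).value, true
}

// Add inserts or updates a value, evicting the least recently used entry when full.
func (c *LRU) Add(key string, value interface{}) {
	if elem, ok := c.items[key]; ok {
		elem.Value.(*entry).value = value
		c.order.MoveToFront(elem)
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key: key, value: value})
}
```

Because the map stores *list.Element pointers, moving an entry to the front never touches the map, which is the "avoid resetting the pointer in the hashmap" point above.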
Are we storing generic interfaces in this cache? If so, I don't recommend implementing our own LRU; it's trivial to just use this instead, unless we want our own custom logic/handlers.
I hadn't considered when I wrote the above that there could be check txs and deliver txs in parallel. We don't want such a cache locking (such as in the LRU you linked); that would cause more harm than good. We could instead keep an "LRU cache pool": allocate a cache on every BeginBlock and return it on EndBlock. That way there's no fear of concurrent access, so we don't need any locks. (Even the lock/unlock process itself is slow.) This actually would probably be easier to implement as a store. A rough sketch of the pool is below.
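A rough sketch of the per-block pool idea, building on the LRU type sketched above. It assumes Acquire and Release are only called from a single goroutine per pool (e.g. a separate pool each for the CheckTx and DeliverTx paths), which is what lets it stay lock-free; the names are illustrative:

```go
// cachePool hands out caches so each block (and each of the CheckTx / DeliverTx
// paths) works against its own cache, with no shared state and hence no locks.
type cachePool struct {
	free []*LRU // caches returned by previous blocks, ready for reuse
}

// Acquire returns a cache at BeginBlock, reusing a released one when possible.
func (p *cachePool) Acquire(capacity int) *LRU {
	if n := len(p.free); n > 0 {
		c := p.free[n-1]
		p.free = p.free[:n-1]
		return c
	}
	return NewLRU(capacity)
}

// Release hands the cache back at EndBlock.
func (p *cachePool) Release(c *LRU) {
	p.free = append(p.free, c)
}
```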
I confused the purpose of this cache with #1947, will start implementing it. |
I think we can punt this to post-launch. |
This is not a perf bottleneck right now |
Summary
Create a cache in the context which can be accessed per keeper.
Problem Definition
Benchmarks in #1924 show that amino deserialization is the clear bottleneck in our system. There are a lot of state reads / amino reads that happen per tx. The example that comes to mind is https://github.com/cosmos/cosmos-sdk/blob/develop/x/stake/keeper/keeper.go#L50. Keeper.GetParams is called very often, but each call performs an amino deserialization. We should cache this value in the context, and when updating, do the state write and update the context. A sketch of the cached read path is below.
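To make this concrete, here is a hand-wavy sketch of what the cached read path could look like, modeled loosely on the linked GetParams. The ctx.CacheGet / ctx.CacheSet accessors are hypothetical stand-ins for whatever the context would actually expose, and the other names are approximate rather than exact SDK identifiers:

```go
// Hypothetical read-through cache on GetParams: decode from the store once,
// then serve the decoded value from the context cache on later calls.
func (k Keeper) GetParams(ctx sdk.Context) (params Params) {
	const cacheKey = "stake/params" // illustrative key
	if cached, ok := ctx.CacheGet(cacheKey); ok {
		return cached.(Params) // served from the context cache, no amino decode
	}
	b := ctx.KVStore(k.storeKey).Get(ParamKey)
	k.cdc.MustUnmarshalBinary(b, &params) // the expensive amino decode we want to avoid repeating
	ctx.CacheSet(cacheKey, params)
	return params
}
```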
Proposal
Add a map from string to interface{} in the ctx. This will let us avoid the amino reads (and hashmap hashing is fairly quick). A sketch is below.
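A minimal sketch of what this could look like on the context side, assuming the cache is just a plain map carried by the Context; the field and method names are made up for illustration and are not existing SDK API:

```go
// Context carries a per-keeper cache alongside its existing fields.
type Context struct {
	// ... existing fields elided ...
	keeperCache map[string]interface{} // keyed by e.g. "<module>/<what>"
}

// CacheGet looks up a previously stored value.
func (c Context) CacheGet(key string) (interface{}, bool) {
	v, ok := c.keeperCache[key]
	return v, ok
}

// CacheSet memoizes a decoded value for later reads within the cache's lifetime.
func (c Context) CacheSet(key string, value interface{}) {
	c.keeperCache[key] = value
}
```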