Implement BCMath LRU cache #14107
Conversation
cc @Girgias @SakiTakamachi Let me know if you're aware of real-world test cases we could try this on, and what your opinion is on something like this.
A typical use case for BCMath is money calculations. That is, if BCMath needs to convert the same string multiple times, it could be a value such as an interest rate percentage, a discount percentage, a tax rate, or a fixed discount amount. Those values should be relatively short, so the benefit may not be much. (These are the only answers I can think of right now; there may be other use cases.)
Right, would still be nice to have specific numbers.
Yes, that use case definitely exists. Note, however, that if BCMath were to support object types, it would keep bc_num inside the object, so this cache may be dedicated to the existing functions only.
You might want to use the BcMathCalculatorBench included in MoneyPHP.
Thanks for the pointer; this will be useful for our testing!
Code looks logical, will wait for benchmark results
Running the moneyphp benchmark showed that this has no benefit, at least for that test case: I see some small gains and some larger losses.
Closing this, as it will not move forward in its current state.
This is a prototype for an LRU cache for operands used in BCMath.
Profiles show that parsing operands takes a significant share of the time (over 30% for the benchmark in #14076).
This PR implements a cache for the 16 most recently used operands so that re-parsing them can be avoided. The underlying idea is that, within a given time window, the same operands are likely to be used again (*).
This implementation could probably also be improved performance-wise.
Important: one notable thing that is still missing is that results are not yet stored in the LRU cache.
(*) The question is whether this holds in real-world use cases. If it does not hold most of the time, a patch like this will cause a performance degradation, because maintaining the LRU itself has some overhead.
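
To illustrate the general idea (this is only a minimal sketch, not the patch itself), below is a self-contained C example of a small, fixed-size LRU cache keyed by the operand string. All names here (`parsed_num`, `parse_operand`, `lru_get`) are made up for the example; the real patch would cache libbcmath's `bc_num` structures and would additionally have to deal with scale handling and PHP's memory management.

```c
/*
 * Sketch: a 16-slot LRU cache mapping operand strings to their parsed
 * representation, so repeated operands skip the parsing step.
 * "parsed_num" is a stand-in for libbcmath's bc_num.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LRU_CAPACITY 16

typedef struct {
    char *str;      /* original operand string (cache key) */
    double value;   /* placeholder for the parsed bc_num   */
} parsed_num;

typedef struct {
    parsed_num entries[LRU_CAPACITY];
    size_t count;   /* valid entries, ordered newest first */
} lru_cache;

/* Hypothetical stand-in for the string-to-number parsing step. */
static double parse_operand(const char *str)
{
    return strtod(str, NULL);
}

/* Move the entry at index i to the front (most recently used slot). */
static void lru_touch(lru_cache *cache, size_t i)
{
    parsed_num hit = cache->entries[i];
    memmove(&cache->entries[1], &cache->entries[0], i * sizeof(parsed_num));
    cache->entries[0] = hit;
}

/* Look up str; on a miss, parse it and insert it at the front,
 * evicting the least recently used entry if the cache is full. */
static double lru_get(lru_cache *cache, const char *str)
{
    for (size_t i = 0; i < cache->count; i++) {
        if (strcmp(cache->entries[i].str, str) == 0) {
            lru_touch(cache, i);
            return cache->entries[0].value;
        }
    }

    /* Miss: drop the oldest entry if necessary, then shift and insert. */
    if (cache->count == LRU_CAPACITY) {
        free(cache->entries[LRU_CAPACITY - 1].str);
        cache->count--;
    }
    memmove(&cache->entries[1], &cache->entries[0],
            cache->count * sizeof(parsed_num));
    cache->entries[0].str = strdup(str);
    cache->entries[0].value = parse_operand(str);
    cache->count++;
    return cache->entries[0].value;
}

int main(void)
{
    lru_cache cache = {0};
    /* A repeated operand (e.g. a tax rate) is parsed only once;
     * unique operands always pay the parsing cost. */
    const char *ops[] = {"0.0825", "19.99", "0.0825", "42.50", "0.0825"};
    for (size_t i = 0; i < sizeof(ops) / sizeof(ops[0]); i++) {
        printf("%s -> %g\n", ops[i], lru_get(&cache, ops[i]));
    }
    return 0;
}
```

The linear scan plus move-to-front is what keeps the bookkeeping cheap for a capacity as small as 16; that bookkeeping is also exactly the overhead mentioned in (*) that has to be amortized by cache hits.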