math: benchmark big.INT operations #15029
Comments
The biggest issue with the current implementation is the truly excessive amount of heap allocation we do. Barely anything is kept on the stack, due to the pointer indirection involved and the number of new ints allocated. In-place operations help significantly here. Having more internal flexibility / optimization around the max size bound is also likely important. To compare against something fixed-precision, e.g. uint256, you should compare performance for single-word arithmetic and 4-word arithmetic. Ideally what we'd see is that for single-word in-place operations, the current implementation outperforms theirs. (If not, there's a problem.) Then we could think of tricks to avoid new heap allocations in the average case. We'd certainly see that 4-word in-place arithmetic in the current approach is slower than theirs. (Somewhat fundamental.) Ideally we'd see generics, like https://github.com/arkworks-rs/algebra/tree/master/ff/src/biginteger , used to allow flexibility in word size while maintaining high performance. (With the edge case that <= 4 words always outperforms, without generics overriding that parameter.)
IIRC, Go will allocate objects onto the heap under 3 primary conditions (per the compiler's escape analysis):

- the object's address escapes the function it was created in (e.g. it is returned, stored in a longer-lived structure, or passed through an interface);
- the object's size is not known at compile time, or is too large for the stack;
- the compiler cannot prove the object's lifetime ends with the enclosing stack frame.

Given this, what ideas do you have to ensure we use the stack more effectively?
We should write some benchmarks and identify hot spots in our implementation. That will probably point to the best first steps.
We should get reproducible benchmarks to verify this, but my experience with prior benchmarks is that the cost is mostly heap allocation and GC overhead. The actual big.Int arithmetic is pretty cheap compared to the time spent heap-allocating just to create an object. Using the mutative operations internally is at least a significant aid here. Every object here has its address taken, because we use pointers to big.Ints, and the big.Int itself contains a slice underneath. The underlying slice can potentially be small-vector optimized onto the stack: https://medium.com/@yulang.chu/go-stack-or-heap-2-slices-which-keep-in-stack-have-limitation-of-size-b3f3adfd6190 , but the pointer indirection in Int & Decimal cannot. We see this in Cosmos math-heavy workloads causing massive amounts of heap allocations. There are a few big directions for speed improvements, starting from the heap claim:
@ValarDragon @tac0turtle okay, hear me out: https://github.com/holiman/uint256 It would make sdkmath and sdkcoins speedy fast. No one needs anything bigger than 256 bits, and we could always create our own custom bigInt that is two uint256s if someone REALLY needs something larger than 256.
It is worth considering doing something else, since:
@tac0turtle the context for that Go 1.20 announcement, AFAIU, is:
We might have related issues:
And
Same issue here as @alpha-omega-labs.
I'm working on Gaia v13.0.2, which I downloaded from https://github.com/cosmos/gaia/releases/tag/v13.0.2
@linhvt-teko did you ever resolve/overcome this issue? I'm seeing something very similar.
@angusscott I never got to the bottom of this problem, so I reinstalled everything and downloaded a snapshot from Polkachu. You can try these steps:
Summary
In the math package we use big.Int for values larger than native integers. We never truly benchmarked the implementation against something like https://github.com/decred/dcrd/blob/master/math/uint256/README.md or against rolling our own implementation.
Proposal
Benchmark big.Int against a library like dcrd's uint256; if we identify that big.Int operations are slow, we should come up with a plan for how to optimise them.