The encode and decode time per element increases as the number of input elements grows. Benchmark output suggests mostly linear or superlinear growth up to 16k elements.
Possible optimizations should be explored for use cases with larger inputs. This library uses a generic base-n codec, so there may be base58-specific optimizations available. It might also be good to document this issue in the README so there are no performance surprises compared to the simpler base16 and base64 codecs, which run in linear time.
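For context, here is a minimal sketch of the generic base-n encoding approach in TypeScript. This is illustrative, not this library's actual code; `encodeBaseN` and its shape are assumptions. Each input byte is folded into a big number via a multiply-by-256-and-add pass over all digits produced so far, so the i-th byte costs O(i) work and the whole encode is O(n²):

```typescript
// Illustrative sketch of a generic base-n encoder (hypothetical names,
// not this library's API). The inner pass over `digits` is the suspected
// bottleneck: it grows with the number of digits produced so far.
const BASE58_ALPHABET =
  '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

function encodeBaseN(input: Uint8Array, alphabet: string = BASE58_ALPHABET): string {
  const base = alphabet.length;
  const digits: number[] = []; // little-endian digits in the target base

  for (const byte of input) {
    // Fold the next byte in: multiply the accumulated number by 256 and add
    // the byte. This pass walks every digit so far -- the quadratic part.
    let carry = byte;
    for (let i = 0; i < digits.length; i++) {
      carry += digits[i] * 256;
      digits[i] = carry % base;
      carry = Math.floor(carry / base);
    }
    while (carry > 0) {
      digits.push(carry % base);
      carry = Math.floor(carry / base);
    }
  }

  // Leading zero bytes are not captured by the big number; base58 encodes
  // each one as the alphabet's zero character ('1').
  let out = '';
  for (const byte of input) {
    if (byte !== 0) break;
    out += alphabet[0];
  }
  // Digits are little-endian; emit most significant first.
  for (let i = digits.length - 1; i >= 0; i--) {
    out += alphabet[digits[i]];
  }
  return out;
}
```

Base16 and base64 avoid this entirely because their bases are powers of two: each output character maps to a fixed group of input bits, with no carries across the whole number.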
These graphs make the algorithm look linear or superlinear rather than quadratic (which it is at much larger sizes). You might try one graph of time per element zoomed in to 16KB, and then a larger graph out to 16MB to show how bigger inputs go quadratic when there are many carries.
There is some confusion about the above graph. Some more details:
Above is the per-element time. Ideally the slope here would be zero, so that the algorithm's total time would scale linearly with dataset size.
Below are graphs for the total dataset time. These are zooms of the same data for 1-512, 1-1k, 1-4k, and 1-16k.
Note for example that encoding 128B takes ~109µs, but 16KB takes ~1.7s. That is a 128× increase in input size but a roughly 15,000× increase in time, consistent with quadratic scaling.
These times are from a ~2012 Intel Core i7 at 2GHz. Absolute times on newer processors would be lower, but the scalability issues would be similar.
The benchmark uses just one random dataset. Results may vary with the input values, since the amount of carry work in the base58 algorithm depends on the data.
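To illustrate the shape of such a measurement, here is a hypothetical harness (an assumption, not the attached benchmark) that reuses the `encodeBaseN` sketch above. A flat per-byte column would indicate linear scaling; a growing one indicates superlinear or quadratic scaling:

```typescript
// Hypothetical timing harness: encode doubling input sizes and report both
// total time and time per byte, using one random dataset per size.
function benchmarkEncode(maxSize = 16 * 1024): void {
  for (let size = 128; size <= maxSize; size *= 2) {
    const input = new Uint8Array(size);
    for (let i = 0; i < size; i++) {
      input[i] = Math.floor(Math.random() * 256);
    }

    const start = performance.now();
    encodeBaseN(input);
    const ms = performance.now() - start;

    console.log(
      `${size} B: ${ms.toFixed(2)} ms total, ` +
      `${((ms * 1000) / size).toFixed(3)} us/byte`
    );
  }
}

benchmarkEncode();
```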
Some references (thanks to @msporny):
https://stackoverflow.com/questions/28418332/computational-complexity-of-base-conversion
https://cs.stackexchange.com/questions/21736/time-complexity-of-base-conversion
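In short, and consistent with those references: with schoolbook big-integer arithmetic, converting $n$ bytes performs one multiply-by-256-and-add pass per input byte, and the $i$-th pass touches $O(i)$ digits, so the total work is

$$T(n) = \sum_{i=1}^{n} \Theta(i) = \Theta\!\left(\frac{n(n+1)}{2}\right) = \Theta(n^2).$$

Subquadratic base conversion is possible with divide-and-conquer and fast multiplication, but simple digit-by-digit loops like the one sketched above are inherently quadratic.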
Benchmark and output:
#5
t1-16k.csv