perf(fromArrayBuffer): use less memory for large buffers #242
Platforms affected
All
Motivation and Context
This improves performance when converting large files to Base64 strings (see apache/cordova-plugin-file#364 for more info, especially the extensive performance analysis by @LightMind).
Another win here is that we reduce the code size considerably. A downside is that we actually lose a bit of performance for small inputs, but I do not think that this would be noticeable.
I explored different approaches here, like converting the original algorithm to use ArrayBuffers. The problem is that we still need to convert the end result to a string, which then becomes a performance bottleneck if you do not have support for `TextDecoder` (which we cannot assume with our current ES5 target).

Fixes #241
Description
`base64.fromArrayBuffer` now uses `btoa` to convert bytes to a Base64-encoded string. We already use its counterpart `atob` in `base64.toArrayBuffer`. Since `btoa` unfortunately operates on binary strings instead of buffers, we first need to convert the raw bytes to a binary string. This is the main performance bottleneck here, but applying `String.fromCharCode` to large chunks of data works reasonably well.
Testing
I added a test that should hopefully prevent people from making careless changes here in the future. However, its reliance on an absolute expected runtime might lead to problems down the road.
I also did some performance comparisons (ops/s) between the new and old version in my local Chrome browser:
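The chunked conversion described above can be sketched roughly as follows. This is an illustrative sketch, not the exact plugin code: the function name and the chunk size of 8192 are assumptions, and `btoa` is assumed to be available (browsers, or Node.js 16+).

```javascript
// Sketch: convert an ArrayBuffer to a Base64 string by building a binary
// string in chunks and handing the result to btoa.
function fromArrayBuffer(arrayBuffer) {
  var bytes = new Uint8Array(arrayBuffer);
  var CHUNK_SIZE = 8192; // illustrative; keeps the apply() argument list small
  var binaryString = '';

  for (var offset = 0; offset < bytes.length; offset += CHUNK_SIZE) {
    var chunk = bytes.subarray(offset, offset + CHUNK_SIZE);
    // String.fromCharCode takes char codes as arguments; apply() spreads the
    // chunk, so each iteration converts up to CHUNK_SIZE bytes at once.
    binaryString += String.fromCharCode.apply(null, chunk);
  }

  return btoa(binaryString);
}
```

Converting the whole buffer with a single `apply` call would risk exceeding the engine's argument-count limit for large files, which is why the loop processes bounded chunks.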