perf: Use charCodeAt instead of TextEncoder for improved UTF string comparison performance. #8778
Hi @amiller-gh, thanks for submitting such a detailed issue, and the PR! It's very much appreciated. We are looking into this. This issue was introduced in 11.3.0 last week (721e5a7); downgrading to 11.2.0 temporarily will resolve this until we publish the next release with a fix.
Hi @amiller-gh, yes, UTF-8 encoding was introduced to be consistent with the backend string sorting. We are currently looking into more efficient ways (hopefully without burdening memory) to do UTF-8 encoding in the SDK. Sorry for the inconvenience.
No worries, glad I could be helpful! Thank you both.
Hey @amiller-gh, I just released 11.3.1. This release reverted the change that was causing this issue. Could you confirm that this issue no longer exists in 11.3.1?
Hey @dlarocque, confirmed! The perf regression has disappeared. Thanks for the fast turnaround 🙏 This was a subset of a number of client-side "bulk-import" issues that we've been wrestling with, largely related to how the Firestore server treats client rate limits vs firebase-admin rate limits. This client-side bulk upload problem seems to be a not-infrequent issue: #7655 (comment). Even just uploading 20k-50k documents from the client SDK (not that many for most "bulk import" systems), we run into […]. Do you think there is an appetite to revisit this internally, and can I start a new issue to discuss client vs admin rate limit differences in more detail?
@amiller-gh Thanks for confirming that this issue is resolved! I'm sorry to hear you've been having other issues with Firestore. If you feel that this discussion is different from the one in #7655, feel free to open a new issue. Other ways to draw more attention to this would be to report the issue to Firebase Support, or submit an idea to UserVoice.
Hi @amiller-gh, could you please help build and test this fix? Lazy encoding is used instead of the original "encode them all first" method. Hopefully this does the UTF-8 encoded string comparison without dragging performance down.
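For reference, a lazy comparison could look roughly like the sketch below (purely illustrative, not the actual patch). Since UTF-8 byte order matches Unicode code-point order, comparing code points on the fly yields the same ordering without allocating any encoded byte arrays:

```javascript
// Illustrative sketch: compare strings in UTF-8 byte order lazily,
// by walking code points instead of pre-encoding both strings.
// UTF-8 preserves code-point order, so this matches byte-wise comparison.
function compareUtf8Lazy(left, right) {
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    const l = left.codePointAt(i); // decodes surrogate pairs to a code point
    const r = right.codePointAt(j);
    if (l !== r) return l < r ? -1 : 1;
    // Code points above U+FFFF occupy two UTF-16 code units.
    i += l > 0xffff ? 2 : 1;
    j += r > 0xffff ? 2 : 1;
  }
  // Shared prefix: the shorter string sorts first.
  return Math.sign((left.length - i) - (right.length - j));
}
```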
Operating System
MacOS Ventura 13.2.1
Environment (if applicable)
Node.js 18.19.1, Chrome 131.0.6778.266
Firebase SDK Version
11.3.0
Firebase SDK Product(s)
Firestore
Project Tooling
Browser React app, Node.js Express server, Electron app with React frontend.
Detailed Problem Description
We've recently migrated a number of systems from firebase-admin to the firebase client SDKs (internal browser-based dashboards, cloud functions, electron background process, etc) and noticed a major reduction in performance for batch Firestore write operations.
This is the flame graph for a 500 document batch write operation that takes ~868ms:

Unfortunately, we have some operations that need to run hundreds of thousands of writes, and repeat this 500 document batch write many times over. All of this time adds up significantly!
Zooming in a little, we saw that the culprit appears to be a TextEncoder.encode() call from a function called compareUtf8Strings, which 1) takes some time to encode the characters, and 2) is quickly thrown out, causing a significant amount of garbage collection.

[flame graph screenshot, zoomed in on compareUtf8Strings]

The current implementation uses TextEncoder to compare strings as UTF-8 integers in byte order:
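The pattern looks roughly like this (a hypothetical sketch of the approach, not the actual Firestore source — note the per-call allocations that feed the garbage collector):

```javascript
// Hypothetical sketch of a TextEncoder-based comparator. Every call
// allocates a TextEncoder plus two Uint8Arrays, which is what shows
// up as encode() time and GC pressure in the flame graph.
function compareUtf8Strings(left, right) {
  const encoder = new TextEncoder(); // new object per comparison
  const leftBytes = encoder.encode(left);
  const rightBytes = encoder.encode(right);
  const len = Math.min(leftBytes.length, rightBytes.length);
  for (let i = 0; i < len; i++) {
    if (leftBytes[i] !== rightBytes[i]) {
      return leftBytes[i] < rightBytes[i] ? -1 : 1;
    }
  }
  // Equal prefix: the shorter string sorts first.
  return Math.sign(leftBytes.length - rightBytes.length);
}
```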
A very small change to use charCodeAt instead means we avoid creating thousands of new TextEncoder objects:
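A sketch of the charCodeAt-based version (illustrative, not the exact PR diff) allocates nothing at all:

```javascript
// Illustrative charCodeAt-based comparator: compares UTF-16 code units
// directly, with no encoder or intermediate arrays.
function compareUtf16Strings(left, right) {
  const len = Math.min(left.length, right.length);
  for (let i = 0; i < len; i++) {
    const l = left.charCodeAt(i); // UTF-16 code unit as a JS number
    const r = right.charCodeAt(i);
    if (l !== r) return l < r ? -1 : 1;
  }
  // Equal prefix: the shorter string sorts first.
  return Math.sign(left.length - right.length);
}
```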
And it reduces the runtime of this operation from 868ms to 66ms!
To get these perf improvements, this change introduces one small, but hopefully inconsequential, change to the way this comparator behaves:
TextEncoder converts the string to a Uint8Array and compares by looking at the individual UTF-8 encoded bytes, but charCodeAt fetches the character as UTF-16, represented by a standard JS 64-bit float. At first blush, I don't see how this will introduce any practical changes to the return value of this function – please correct me if I'm wrong! I'm not sure if the library was comparing strings as UTF-8 for any particular reason (mirroring underlying database behavior? Downstream perf implications?), but if not, this may be a very easy win for client-side SDK performance.
I've opened up a PR with this change at #8779. I'm boarding a flight right now, so will make sure tests all still pass once I'm back on the ground, but wanted to open this issue to kick off discussion.
Steps and code to reproduce issue
See above.