Replies: 1 comment 1 reply
This is awesome 👍 I did some benchmarks and we see a 20% speed-up for real-world queries. [benchmark chart, higher is better] The benchmark runs something like: [benchmark code not captured]. The 3 columns on the very left are from our pure in-memory store (basically JS Maps and Arrays), just as a baseline. [remaining columns not captured] Various series within a column show variations of the same query.
Also, the result of each query is written to disk, so the store and index are not the only factors in play, but we wanted to see the impact close to the real world. Pretty impressive results in comparison to the in-memory store. Especially happy to see counts improve so much. Looking forward to …
This release includes a significant overhaul of how get and query/range retrievals are performed, reducing both the number of native calls and the number of JS objects that have to be transmitted across them. These changes also accompany and facilitate support for V8's new fast-api-calls feature, which further improves performance. This can yield over a 50% performance improvement in get operations, and more than twice the speed when iterating through ranges with small payloads, where deserialization is not the dominant cost. There are also very large improvements in the performance of count operations and offset handling.
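The core idea of reducing native calls can be sketched in plain JavaScript (with hypothetical names, not lmdb-js's actual internals): rather than making one native call per entry, a single call fills a shared buffer with many length-prefixed values, and JS then decodes them locally without crossing the native boundary again.

```javascript
// Sketch: batched range retrieval. `entries` stands in for the database;
// `nativeFillBuffer` stands in for a native call that packs many values
// into one buffer per invocation.
const entries = [['a', 'alpha'], ['b', 'beta'], ['c', 'gamma']];

function nativeFillBuffer(startIndex, buffer) {
  let offset = 0, count = 0;
  for (let i = startIndex; i < entries.length; i++) {
    const value = Buffer.from(entries[i][1], 'utf8');
    if (offset + 4 + value.length > buffer.length) break; // batch is full
    buffer.writeUInt32LE(value.length, offset); // length prefix
    value.copy(buffer, offset + 4);
    offset += 4 + value.length;
    count++;
  }
  return count; // how many entries were packed
}

// JS-side iteration: one buffer fill per batch instead of per entry.
function* rangeValues() {
  const buffer = Buffer.alloc(4096);
  let index = 0;
  while (index < entries.length) {
    const count = nativeFillBuffer(index, buffer);
    let offset = 0;
    for (let i = 0; i < count; i++) {
      const len = buffer.readUInt32LE(offset);
      yield buffer.toString('utf8', offset + 4, offset + 4 + len);
      offset += 4 + len;
    }
    index += count;
  }
}

console.log([...rangeValues()]); // ['alpha', 'beta', 'gamma']
```

With small payloads, the per-call overhead dominates, which is why batching like this (and fast-api-calls lowering the cost of the remaining calls) can more than double range-iteration throughput.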
Also related to these changes is switching (back) to serializing and deserializing keys in JS (instead of C++), which reduces the number of JS values/objects that cross the native call barrier. This also opens the door to more customized JS key serialization strategies (negation, custom UUID handling, and a potentially more performant little-endian format).
This discussion was created from the release Faster gets and query/range retrieval.