Upgradeable ArrayBuffers #368
Comments
How would this handle unique symbols? Is there a fundamental difference from JSON serialization plus string comparison? What would happen if you used a different schema between serialization and deserialization?
Good question; I don't have an immediate answer to that. It could be that symbols are not permitted as values, but that would limit functionality (such as storing WeakMap keys).
The ArrayBuffer representation wouldn't be JSON; it would be a sequence of values. This raises a question of whether a more ergonomic solution would be to say that the primitive can contain its context such that it can be upgraded without a schema. It would be much more flexible this way, but you'd lose the ability to perform random access.
Same type of thing as if you created a protobuf with one schema and deserialized it with a different one.
Maybe a better solution in this vein is to say that the primitive is a CBOR buffer (RFC 8949), and then we can support fully ergonomic data access operations, with the catch that they may need to walk the CBOR buffer.

```js
let input = { hello: "world", x: 100, y: true };

// Serialize to a primitive ArrayBuffer
let primitive = input.toCBOR();

// Deep equality
console.assert(primitive === input.toCBOR());

// Access a field (may require walking the CBOR :/)
console.log(primitive.hello); // "world"
```

There are dozens of attempts at binary object representation formats; maybe we could choose one with more efficient random access.
Please support symbols in the initial proposal; they're very important in our custom serialization format.
I don't think reducing equality to byte comparison will ever be desirable. It would only work if string and bigint values were either copied into the structure (presumably as in this suggestion) or interned on insertion. I don't think copying strings is desirable: if an application has a very large string value and decides to make a record from it, that shouldn't duplicate the memory use. Implementations of string concatenation even tend to avoid copying (instead producing "ropes" consisting of the source strings). As for the interning option, it's usually considered more of a performance burden than a gain, since it involves equality + hashing + global map manipulation on every construction/free (all unconditional) rather than simply doing equality on comparison.
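For what it's worth, JavaScript already exposes one interning mechanism that shows this trade-off: the global symbol registry. `Symbol.for` pays a registry lookup at construction so that comparison reduces to a cheap identity check:

```javascript
// Interned: construction does the registry lookup; comparison is identity.
const a = Symbol.for("some very large key string");
const b = Symbol.for("some very large key string");
console.assert(a === b);

// Plain symbols are never interned, so equal descriptions don't match.
console.assert(Symbol("x") !== Symbol("x"));
```

Whether that per-construction cost pays off depends on the construction-to-comparison ratio, which is the crux of the comment above.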
Possibly related: #134, #218
A potential direction this proposal could take, which would solve a new set of problems, could be "upgradeable ArrayBuffers":
Something sort-of like this...
What this achieves:
Some initial open questions: