There are a number of APIs where it makes sense to allow storage of arbitrary content-supplied data, and structured serialization is the logical choice. In many of these cases it also makes sense to bound the amount of data stored, because 1) the "normal" usage is expected to be small and reasonable, and 2) there's a high chance that the data needs to be synchronously available and so will be held in memory at all times, potentially multiple times.
Currently the structured serialization algorithm specifies what data gets encoded, but the underlying byte encoding is not specified and is appropriately implementation-dependent. Basing a limit on those implementation-specific buffer sizes would lead to browser compatibility issues: a data payload that was under the limit in one browser might throw in another.
So I propose that we augment the structured serialization algorithm so that it can optionally compute a canonical serialized length, one that grows in rough proportion to the included data (and therefore to any implementation-specific encoding of it). The key feature is cross-browser consistency, not fidelity to actual implementation memory usage.
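For illustration only, here is a minimal sketch (all names and byte costs are hypothetical, not from any spec) of how such a canonical length could be computed: walk the value the same way StructuredSerialize does, summing fixed, spec-defined per-type costs rather than consulting any engine's real wire format.

```ts
// Hypothetical sketch: a canonical "serialized length" using fixed,
// spec-defined costs. Only a subset of serializable types is handled,
// and every constant here is illustrative, not normative.
function canonicalSerializedLength(value: unknown, seen = new Set<object>()): number {
  const TAG = 1; // one byte per type tag, for illustration
  if (value === null || value === undefined || typeof value === "boolean") return TAG;
  if (typeof value === "number") return TAG + 8;                      // IEEE-754 double
  if (typeof value === "bigint") return TAG + Math.ceil(value.toString(16).length / 2);
  if (typeof value === "string") return TAG + 4 + value.length * 2;   // length prefix + UTF-16 code units
  if (value instanceof ArrayBuffer) return TAG + 4 + value.byteLength;
  if (ArrayBuffer.isView(value)) return TAG + 4 + value.byteLength;
  if (typeof value === "object") {
    if (seen.has(value)) return TAG + 4;                              // back-reference to an earlier object
    seen.add(value);
    let total = TAG + 4;                                              // tag + entry count
    if (Array.isArray(value)) {
      for (const item of value) total += canonicalSerializedLength(item, seen);
    } else if (value instanceof Map) {
      for (const [k, v] of value)
        total += canonicalSerializedLength(k, seen) + canonicalSerializedLength(v, seen);
    } else if (value instanceof Set) {
      for (const item of value) total += canonicalSerializedLength(item, seen);
    } else {
      for (const [k, v] of Object.entries(value))
        total += canonicalSerializedLength(k, seen) + canonicalSerializedLength(v, seen);
    }
    return total;
  }
  throw new DOMException("not serializable", "DataCloneError");
}
```

A per-API limit check then becomes a simple comparison against this number, which is the same in every implementation even when their real serialization formats differ.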
This (establishing a limit) has come up previously in discussion around:
Existing APIs that use structured clone to add ad-hoc storage:
cc: @annevk