They're listed as strings in the spec, but it would seem highly inefficient to encode them that way. Are they in fact encoded as strings? (If not, you could encode them as LEB128 integers.)
We have several binary encodings and we're still experimenting and tweaking them to improve compression and parse speed. The one we've been using to measure parse speed does encode them as strings, which are themselves represented as indices into the string table, written as LEB32 integers. The tokenizer is optimized to perform LEB32 lookups instead of string lookups, so this remains quite fast and reasonably easy to compress.
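As a rough illustration of that representation, here is a minimal sketch of writing a string-table index as an unsigned LEB128 varint. The function name and the sample index are illustrative only; this is not the actual encoder from the reference implementation.

```rust
/// Append `value` to `buf` as an unsigned LEB128 varint (7 bits per byte,
/// high bit set on every byte except the last).
fn write_leb128(buf: &mut Vec<u8>, mut value: u32) {
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // more bytes follow
        }
        buf.push(byte);
        if value == 0 {
            break;
        }
    }
}

fn main() {
    // Suppose an identifier sits at index 300 in the string table.
    let index: u32 = 300;
    let mut buf = Vec::new();
    write_leb128(&mut buf, index);
    // 300 = 0b10_0101100, so the encoding is two bytes: [0xAC, 0x02].
    assert_eq!(buf, vec![0xac, 0x02]);
    println!("{:02x?}", buf);
}
```

Small, frequently used indices encode to a single byte, which is why the string-table indirection stays cheap both on disk and in the tokenizer.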
We're also experimenting with encoding them as special interfaces in a variant of the format that uses predictions on interfaces to improve compression, and this seems to noticeably decrease file size. We haven't yet checked the impact on decompression speed.
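To give an idea of what "predictions on interfaces" can mean, here is a heavily simplified sketch: a per-context frequency table ranks the interfaces already seen in that context, so the most likely interface gets the smallest index (and therefore the shortest varint). All names, the context key, and the ranking scheme are assumptions for illustration; the actual prediction model in the format is not described in this thread.

```rust
use std::collections::HashMap;

/// Hypothetical frequency-based predictor: for each (parent interface, field)
/// context, interfaces seen most often so far get the lowest ranks.
struct InterfacePredictor {
    counts: HashMap<(String, String), HashMap<String, u32>>,
}

impl InterfacePredictor {
    fn new() -> Self {
        Self { counts: HashMap::new() }
    }

    /// Return the rank of `interface` among interfaces seen in this context
    /// (most frequent first), then update the counts. Lower ranks compress
    /// better once LEB128-encoded.
    fn encode(&mut self, parent: &str, field: &str, interface: &str) -> u32 {
        let table = self
            .counts
            .entry((parent.to_string(), field.to_string()))
            .or_default();
        let mut ranked: Vec<_> = table.iter().collect();
        ranked.sort_by(|a, b| b.1.cmp(a.1).then(a.0.cmp(b.0)));
        let rank = ranked
            .iter()
            .position(|(name, _)| name.as_str() == interface)
            .unwrap_or(ranked.len()) as u32;
        *table.entry(interface.to_string()).or_insert(0) += 1;
        rank
    }
}

fn main() {
    let mut p = InterfacePredictor::new();
    // In a function body, expression statements dominate, so after a few
    // occurrences "ExpressionStatement" gets rank 0 (a one-byte varint).
    for _ in 0..3 {
        p.encode("FunctionDeclaration", "body", "ExpressionStatement");
    }
    assert_eq!(
        p.encode("FunctionDeclaration", "body", "ExpressionStatement"),
        0
    );
}
```

The point of such a scheme is that highly predictable nodes cost almost nothing on disk, which matches the observation above that the variant shrinks the file, while leaving open the question of what it costs at decode time.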