Currently, if a pipeline of codecs is used, it would be useful to track the intermediate sizes of the buffers as they are transformed, so that enough scratch space can be allocated to decode the data efficiently.
When the size can be determined independently of the data, based on e.g. shapes, implementations already can (and do) do this. Some compression codecs also store the decoded size. In general, though, it might be necessary to store an additional size field along with the encoded data. Is that what you are proposing?
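A minimal sketch of that size-field idea, assuming a hypothetical wrapper codec (the `SizedCodec` name and the `inner` encode/decode-on-bytes interface are illustrative, not an existing API):

```python
import struct

class SizedCodec:
    """Wrap a codec so the decoded size is stored alongside the payload.

    Hypothetical sketch: ``inner`` is any object with ``encode``/``decode``
    methods that take and return bytes.
    """

    HEADER = struct.Struct("<Q")  # 8-byte little-endian decoded size

    def __init__(self, inner):
        self.inner = inner

    def encode(self, buf: bytes) -> bytes:
        # Prepend the decoded size so decode() can pre-allocate exactly once.
        return self.HEADER.pack(len(buf)) + self.inner.encode(buf)

    def decode(self, buf: bytes) -> bytes:
        (decoded_size,) = self.HEADER.unpack_from(buf)
        out = self.inner.decode(buf[self.HEADER.size:])
        assert len(out) == decoded_size
        return out
```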
Indeed, for the case of a single codec this is unneeded.
I think it comes up when codecs are chained, for example run-length encoding or bit packing followed by another compressor. It would be helpful to know the size of the buffer between those steps during decoding, so that it can be allocated appropriately.
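To make the chained case concrete, here is a toy example (an illustrative run-length codec plus zlib, not a real codec implementation) where recording the intermediate size at encode time lets the decoder size the buffer between the two stages up front:

```python
import zlib

def rle_encode(data: bytes) -> bytes:
    """Toy run-length encoding: (count, byte) pairs, count capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while run < 255 and i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        out += data[i + 1:i + 2] * data[i]
    return bytes(out)

# Encode: record the buffer size after each stage as we go.
original = b"a" * 1000 + b"b" * 24
intermediate = rle_encode(original)
encoded = zlib.compress(intermediate)
sizes = [len(original), len(intermediate), len(encoded)]

# Decode: sizes[1] tells us exactly how big the buffer between zlib
# and RLE will be, so scratch space can be allocated up front.
assert rle_decode(zlib.decompress(encoded, bufsize=sizes[1])) == original
```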
Yeah, this is what I'm thinking.
It's possible that the exact size of every intermediate buffer is not strictly necessary; instead, the size of the largest intermediate buffer seen may be good enough. In that way it would be similar to Python's `__length_hint__`.
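Along those lines, a rough sketch of a pipeline that tracks only the largest buffer seen while encoding and exposes it as a hint (the `Pipeline` class and `scratch_hint` attribute are made up for illustration):

```python
class Pipeline:
    """Chain of codecs that records the largest buffer seen while encoding."""

    def __init__(self, *codecs):
        self.codecs = codecs
        self.scratch_hint = 0  # largest buffer size (in bytes) seen so far

    def encode(self, buf: bytes) -> bytes:
        for codec in self.codecs:
            self.scratch_hint = max(self.scratch_hint, len(buf))
            buf = codec.encode(buf)
        self.scratch_hint = max(self.scratch_hint, len(buf))
        return buf

    def decode(self, buf: bytes) -> bytes:
        # A decoder that writes into caller-provided buffers could allocate
        # a single scratch buffer of ``scratch_hint`` bytes up front, since
        # the hint is an upper bound on every intermediate result.
        for codec in reversed(self.codecs):
            buf = codec.decode(buf)
        return buf
```

Like `__length_hint__`, the hint only needs to be an approximation; here it is an upper bound, so over-allocating slightly is the worst case.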