Currently, we take a naive approach: a payload (either a cache value or a pubsub message) contains a fixed number of random bytes to achieve a target compression ratio. These bytes are grouped at the head of the payload (ignoring the additional header we pack into pubsub messages to track timing and integrity).
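For concreteness, here is a minimal sketch of the current scheme (the function name and the zero-byte filler are assumptions for illustration, not the actual implementation). For a target ratio R, roughly 1/R of the payload must be incompressible:

```python
import os

def make_payload(size: int, target_ratio: float) -> bytes:
    """Hypothetical sketch: incompressible random bytes grouped at the
    head, followed by highly compressible filler. With ~1/target_ratio
    of the bytes random, a general-purpose compressor should land near
    the target compression ratio (uncompressed / compressed)."""
    random_len = int(size / target_ratio)
    return os.urandom(random_len) + b"\x00" * (size - random_len)
```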
We should do some analysis to determine whether the entropy needs to be spread throughout the payload, e.g. by shuffling the bytes, by drawing from a reduced symbol set, or by some other scheme; see the sketch below.
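Two possible ways to spread the entropy, sketched under the same assumptions (the function names are hypothetical). For the reduced symbol set, a target ratio R corresponds to roughly 8/R bits of entropy per byte, though a real compressor will only approximate that bound:

```python
import os
import random

def shuffled_payload(size: int, target_ratio: float) -> bytes:
    """Variant A: same byte population as the current scheme, but
    shuffled so the entropy is spread uniformly instead of grouped
    at the head."""
    random_len = int(size / target_ratio)
    data = bytearray(os.urandom(random_len) + b"\x00" * (size - random_len))
    random.shuffle(data)
    return bytes(data)

def reduced_symbol_payload(size: int, bits_per_byte: int) -> bytes:
    """Variant B: every byte is random, but drawn from a reduced symbol
    set, so entropy is spread evenly at ~bits_per_byte per position
    (e.g. 4 bits -> 16 symbols -> an ideal ratio near 2)."""
    symbols = 1 << bits_per_byte
    return bytes(random.randrange(symbols) for _ in range(size))
```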
It would be interesting to know whether this even affects the compression/decompression overheads in an appreciable way for the value/message sizes we expect.
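A quick way to answer that question, reusing the generators sketched above (zlib stands in here for whatever compressor is actually in play):

```python
import time
import zlib

def measure(payload: bytes, trials: int = 100):
    """Report achieved ratio and mean compress/decompress time."""
    t0 = time.perf_counter()
    for _ in range(trials):
        blob = zlib.compress(payload)
    t1 = time.perf_counter()
    for _ in range(trials):
        zlib.decompress(blob)
    t2 = time.perf_counter()
    ratio = len(payload) / len(blob)
    return ratio, (t1 - t0) / trials, (t2 - t1) / trials

for name, payload in [
    ("head-grouped", make_payload(4096, 2.0)),
    ("shuffled", shuffled_payload(4096, 2.0)),
    ("reduced-symbols", reduced_symbol_payload(4096, 4)),
]:
    ratio, c, d = measure(payload)
    print(f"{name}: ratio={ratio:.2f} "
          f"compress={c * 1e6:.1f}us decompress={d * 1e6:.1f}us")
```

Running something like this across the expected range of value/message sizes would show whether the placement of the entropy moves the ratio or the timings enough to matter.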