They seem to use two template parameters, `<size_t copy_amount, bool use_shuffle>`, where copy_amount can be 8, 16 or 32.
lz4_flex currently uses a wild_copy size of 16, but depending on the architecture, a larger or smaller value might perform better.
wild_copy copies blocks of 16 bytes, even though the actual data required may be less. That's possible whenever we are far enough from the end of the output buffer, which is most of the time. https://github.com/PSeitz/lz4_flex/blob/main/src/block/decompress.rs#L35-L46
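For illustration, here is a minimal sketch of such a wild-copy loop with the copy size as a const generic, so different sizes could be instantiated. This is a simplified, hypothetical version, not the actual lz4_flex code linked above:

```rust
/// Sketch of a wild-copy loop with a configurable block size (hypothetical,
/// not the real lz4_flex implementation). Copies in fixed COPY_AMOUNT-byte
/// blocks and may write past `len`, so the caller must guarantee enough
/// headroom in `dst` (at least `len` rounded up to a multiple of COPY_AMOUNT).
unsafe fn wild_copy<const COPY_AMOUNT: usize>(src: *const u8, dst: *mut u8, len: usize) {
    let mut copied = 0;
    while copied < len {
        // Copy a full block even if fewer bytes remain; the headroom makes this safe.
        unsafe {
            core::ptr::copy_nonoverlapping(src.add(copied), dst.add(copied), COPY_AMOUNT);
        }
        copied += COPY_AMOUNT;
    }
}
```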
ClickHouse has implemented a fast lz4 decompress method that picks a variant based on performance statistics. Maybe it would be a good approach to have something like that in the lz4_flex crate as well.