The current wasm compressor produces an array of bitwise probabilities (count = `input.length * inBits`) that is fed to `ANSEncoder`, but the optimization process only needs the number of bytes used, so this can be massively simplified. The length of the first line is `-log_2(PROD_i { predictions[i] / 2^(precision + 1) })` plus the coding overhead, while the length of the second line is currently independent of the exact content of the first line.
I've experimentally implemented this and found that while it is indeed faster, the estimation was slightly off for an unknown reason; probably the coding overhead is not as insignificant as assumed. This can be a problem at the later stages of the optimization, where each improvement can be as small as a single byte. Whether this is relevant or not hasn't yet been investigated.
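The idea above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function names and the constant `overheadBytes` are hypothetical, and it assumes `predictions[i]` is the scaled probability (out of `2^(precision + 1)`) that the model assigned to the bit that actually occurred.

```javascript
// Estimate the coded length in bits directly from the bitwise probabilities,
// without running the full ANSEncoder. Each bit coded with probability p
// costs -log2(p) bits; here p = predictions[i] / 2^(precision + 1).
function estimateBits(predictions, precision) {
    const scaleLog2 = precision + 1; // log2 of the probability denominator
    let bits = 0;
    for (const p of predictions) {
        bits += scaleLog2 - Math.log2(p); // equals -log2(p / 2^(precision+1))
    }
    return bits;
}

// Estimated byte count; `overheadBytes` is a hypothetical constant standing
// in for the coding overhead, which the experiment suggests is not negligible.
function estimateBytes(predictions, precision, overheadBytes = 2) {
    return Math.ceil(estimateBits(predictions, precision) / 8) + overheadBytes;
}
```

For example, a prediction of `2^precision` out of `2^(precision + 1)` is a probability of exactly 1/2, which should cost exactly one bit under this estimate.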