CharLS should only use the valid bits from the passed input buffer #26
Note: the actual solution is going in the direction of applying a bitmask and only using the bits that need to be used, instead of relying on the caller to ensure that all unused bits are set to zero.
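A minimal sketch of the bitmask idea (a hypothetical helper, not the actual CharLS implementation): derive a mask from the bits-per-sample value and apply it to every sample that is read, so stray high bits in the caller's buffer cannot influence the encoded stream.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical illustration of the bitmask approach: keep only the low
// bits_per_sample bits of every input sample instead of trusting the
// caller to have zeroed the unused high bits.
std::vector<uint16_t> mask_samples(const std::vector<uint16_t>& input, const int bits_per_sample)
{
    const uint16_t mask = static_cast<uint16_t>((1U << bits_per_sample) - 1U);

    std::vector<uint16_t> result;
    result.reserve(input.size());
    for (const uint16_t sample : input)
    {
        result.push_back(static_cast<uint16_t>(sample & mask)); // drop invalid high bits
    }
    return result;
}
```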
Out-of-range samples could be a consequence of signed data. We have opened a discussion on the DICOM forum about the near-lossless encoding problem. I think there are two problems:
As a result, I think that lossy (near-lossless) encoding of signed data should be banned. However, this problem is outside the scope of the CharLS library.
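As an illustration of why signed data is risky here (a sketch, not part of CharLS or the DICOM discussion): when signed samples are simply reinterpreted as unsigned, values just below zero land at the top of the unsigned range, and a small near-lossless error there can wrap around into a huge reconstruction error. Level-shifting the data into the unsigned range first avoids the wrap-around.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical illustration: reinterpreting signed samples as unsigned
// (two's complement) places values just below zero at the top of the
// unsigned range, so a small near-lossless error can wrap around and
// become a huge reconstruction error. Level-shifting the data into the
// unsigned range first avoids that wrap-around.
std::vector<uint16_t> level_shift(const std::vector<int16_t>& input)
{
    std::vector<uint16_t> result;
    result.reserve(input.size());
    for (const int16_t sample : input)
    {
        // Shift [-32768, 32767] to [0, 65535].
        result.push_back(static_cast<uint16_t>(static_cast<int32_t>(sample) + 32768));
    }
    return result;
}
```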
Commit 6a30e19 adds support to ensure that CharLS only uses the valid bits from the input buffer.
During the encoding process the caller needs to pass the bitPerSample parameter and the buffer with the samples to encode. CharLS will not check whether the samples are in the correct range, for example:
bitPerSample = 10 (maximum valid value = 1023) while passing a sample value of 2000.
In the debug build an assert deep down will trigger because a computation failed. As most users use the release build, they silently generate bad encoded bit streams.
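A hedged sketch of how a caller could guard against this today (the helper and parameter names are illustrative, not a CharLS API): verify that every sample fits in the declared number of bits before handing the buffer to the encoder.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical caller-side check: reject buffers that contain samples
// outside the range implied by bits_per_sample. For 10 bits the maximum
// valid value is 2^10 - 1 = 1023, so a sample of 2000 would be rejected
// here instead of silently producing a bad encoded bit stream.
bool samples_in_range(const std::vector<uint16_t>& samples, const int bits_per_sample)
{
    const uint16_t max_value = static_cast<uint16_t>((1U << bits_per_sample) - 1U);

    for (const uint16_t sample : samples)
    {
        if (sample > max_value)
            return false;
    }
    return true;
}
```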