sensors: Fix overflow in default decoder #61186
Merged
The default decoder would take the micro-unit value of the old sensor
value and multiply it by INT32_MAX. At times this overflowed the
int64_t intermediate, which caused bugs such as when -7952 was used
(-7952000000 * INT32_MAX < INT64_MIN).
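For illustration, a minimal standalone sketch of the failure mode
(`value_u` is a hypothetical stand-in for the decoder's intermediate,
and doubles stand in for the product so the demonstration itself avoids
the undefined signed overflow):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* -7952 in micro-units, as in the reported bug. */
	int64_t value_u = -7952 * 1000000LL;

	/* The old decoder computed value_u * INT32_MAX in int64_t.
	 * The true product is about -1.71e19, but INT64_MIN is only
	 * about -9.22e18, so the multiplication overflows (undefined
	 * behavior for signed integers in C).
	 */
	double product = (double)value_u * (double)INT32_MAX;

	printf("needed: %.3e, INT64_MIN: %.3e\n", product, (double)INT64_MIN);
	return 0;
}
```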
Instead, the new math converts:

    value_u * INT32_MAX / ((1 << header->shift) * 1000000)

to a bitmap:

- `sample.val1` consumes the upper `N` bits
- `sample.val2 * BIT(32 - N) / 1000000` consumes the lower `32 - N` bits
This both improves the accuracy and avoids the overflow, since `shift`
is guaranteed to be between 0 and 31.
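As a rough sketch of the new decomposition (the helper name
`pack_sample`, the `BIT64` macro, and treating `N` as a free parameter
are assumptions for illustration, not the decoder's actual code):

```c
#include <stdint.h>

/* 64-bit variant of Zephyr's BIT() so that shifting by 32 is safe even
 * on 32-bit targets (assumption; the real decoder may do this differently).
 */
#define BIT64(n) (INT64_C(1) << (n))

/* Hypothetical helper: pack val1 (integer part) and val2 (micro-unit
 * fraction) into a 32-bit fixed-point word where val1 takes the upper
 * n bits and the scaled val2 takes the lower 32 - n bits.
 */
static int32_t pack_sample(int32_t val1, int32_t val2, uint8_t n)
{
	/* val1 consumes the upper n bits. */
	int64_t out = (int64_t)val1 << (32 - n);

	/* val2 * BIT(32 - n) / 1000000 consumes the lower 32 - n bits.
	 * |val2| < 1000000 and BIT64(32 - n) <= 2^32, so the widest
	 * intermediate stays under 2^52, far from int64_t's limits.
	 */
	out += (int64_t)val2 * BIT64(32 - n) / 1000000;

	return (int32_t)out;
}
```

Because every intermediate now fits comfortably in `int64_t`, and `val2`
is converted directly into fractional bits rather than going through one
large multiply-then-divide, the result is both overflow-free and more
precise.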