**Describe the bug**

Large values for the `i64` and `u64` types are corrupted by the cast to `f64` and back to `i64`/`u64` in the JSON decoder's `build_primitive_array` method.

**To Reproduce**

Pass a large `i64` value through the decoder as demonstrated in this commit. The converted value will be slightly smaller. To create a breaking test, I passed `1627668684594000000` and the resulting value came out as `1627668684593999872`, a difference of `128`.

**Expected behavior**

The converted value should match the value passed to the decoder. In this case, the value in the created record batch should be `1627668684594000000`.

**Additional context**

I found this bug while implementing timestamp support in kafka-delta-ingest and delta-rs. Valid nanosecond timestamps are on the critical path for us there. I also have an arrow-rs PR in place already to fix this.
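The precision loss can be reproduced directly with plain casts, independent of the decoder. This is a minimal sketch, not the decoder's actual code path: `f64` has a 52-bit mantissa, so integers above 2^53 are not all exactly representable, and the example value falls in a range where representable doubles are 256 apart.

```rust
fn main() {
    let original: i64 = 1627668684594000000;

    // Casting to f64 rounds to the nearest representable double.
    // In the range [2^60, 2^61), representable values are multiples
    // of 2^8 = 256, so up to 128 of precision can be lost.
    let roundtripped = original as f64 as i64;

    assert_eq!(roundtripped, 1627668684593999872);
    assert_eq!(original - roundtripped, 128);
    println!("{} -> {}", original, roundtripped);
}
```

This is why nanosecond-precision timestamps (which are on the order of 1.6 × 10^18) are reliably corrupted by any `i64 -> f64 -> i64` round trip.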