parquet: improve BOOLEAN writing logic and report error on encoding fail #443
Conversation
When writing BOOLEAN data, writing more than 2048 rows of data will overflow the hard-coded 256-byte buffer set for the bit-writer in the PlainEncoder. Once this occurs, further attempts to write to the encoder fail because capacity is exceeded, but the errors are silently ignored. This fix improves the error detection and reporting at the point of encoding and modifies the logic for bit writing (BOOLEANs). The bit_writer is initially allocated 256 bytes (as at present), then each time the capacity is exceeded the capacity is incremented by another 256 bytes. This certainly resolves the current problem, but it's not exactly a great fix because the capacity of the bit_writer could now grow substantially. Other data types seem to have a more sophisticated mechanism for writing data which doesn't involve growing or having a fixed-size buffer. It would be desirable to make the BOOLEAN type use this same mechanism if possible, but that level of change is more intrusive and probably requires greater knowledge of the implementation than I possess. resolves: apache#349
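To make the numbers concrete: the initial 256-byte buffer holds 256 * 8 = 2048 one-bit BOOLEAN values, which is why the failure appears at exactly row 2049. Below is a minimal, self-contained sketch of the grow-on-overflow behaviour described above; GrowableBitBuffer is an illustrative stand-in, not the parquet crate's BitWriter.

// Sketch only: models the growth policy described in this PR.
// With a 256-byte buffer and 1 bit per BOOLEAN, capacity is 2048 values;
// value 2049 triggers a 256-byte extension instead of a silent failure.
struct GrowableBitBuffer {
    capacity_bytes: usize, // current capacity in bytes
    bits_written: usize,   // total bits written so far
}

impl GrowableBitBuffer {
    fn new() -> Self {
        Self { capacity_bytes: 256, bits_written: 0 }
    }

    fn put_bit(&mut self, _bit: bool) {
        if self.bits_written + 1 > self.capacity_bytes * 8 {
            // Grow by another 256 bytes rather than dropping the write.
            self.capacity_bytes += 256;
        }
        self.bits_written += 1;
    }
}

fn main() {
    let mut buf = GrowableBitBuffer::new();
    for i in 0..2049 {
        buf.put_bit(i % 2 == 0);
    }
    // The buffer grew exactly once, on the 2049th value.
    assert_eq!(buf.capacity_bytes, 512);
    println!("bits written: {}, capacity: {} bytes", buf.bits_written, buf.capacity_bytes);
}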
parquet/src/encodings/encoding.rs
Outdated
if self.bw_bytes_written + values.len() >= self.bit_writer.capacity() {
    self.bit_writer.extend(256);
}
T::T::encode(values, &mut self.buffer, &mut self.bit_writer)?;
self.bw_bytes_written += values.len();
I'm going to add a comment myself! :-)
I just realised that I only want to do this checking if the encoding is for a Boolean, otherwise it's wasted work/memory. I'll think of the best way to achieve that.
Tacky, but I can't think of a better way to do this without specialization.
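For reference, one way to avoid specialization here would be a runtime check of the column's physical type, using the DataType trait's get_physical_type() from the parquet crate. The helper below is hypothetical, not part of this PR.

use parquet::basic::Type;
use parquet::data_type::DataType;

// Hypothetical helper: only BOOLEAN columns go through the bit_writer
// path, so only they need the extra capacity check.
fn needs_bit_writer_capacity_check<T: DataType>() -> bool {
    T::get_physical_type() == Type::BOOLEAN
}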
Remove the byte tracking from the PlainEncoder and use the existing bytes_written() method in BitWriter. This is neater.
Ok. I'm finished poking this now. I've isolated the changes required to 2 files and eliminated the original runtime impact from the PlainEncoder.
Thanks for the contribution @garyanaplan! I will try and review this carefully tomorrow.
Codecov Report
@@ Coverage Diff @@
## master #443 +/- ##
==========================================
- Coverage 82.71% 82.65% -0.06%
==========================================
Files 163 165 +2
Lines 44795 45556 +761
==========================================
+ Hits 37051 37655 +604
- Misses 7744 7901 +157
Continue to review full report at Codecov.
Thank you again @garyanaplan. I reviewed this logic carefully and it seems reasonable to me. I think it would be good if someone more familiar with this code (@sunchao or @nevi-me) could also look at the approach.
Is there any way to provide a test for this code (e.g. the reproducer from https://github.com/apache/arrow-rs/issues/349)?
As the core issue appears to be that the return value of put_value wasn't being checked, I wondered if there were more places where the return value isn't checked, and it seems there may be:
/Users/alamb/Software/arrow-rs/parquet/src/data_type.rs
668: if !bit_writer.put_value(*value as u64, 1) {
/Users/alamb/Software/arrow-rs/parquet/src/encodings/encoding.rs
602: self.bit_writer.put_value(packed_value, bit_width);
607: self.bit_writer.put_value(0, bit_width);
/Users/alamb/Software/arrow-rs/parquet/src/encodings/levels.rs
118: if !encoder.put_value(*value as u64, bit_width as usize) {
/Users/alamb/Software/arrow-rs/parquet/src/encodings/rle.rs
265: .put_value(self.buffered_values[i], self.bit_width as usize);
 for value in values {
-    bit_writer.put_value(*value as u64, 1);
+    if !bit_writer.put_value(*value as u64, 1) {
Since put_value returns false if there isn't enough space, you might be able to avoid errors with something like:
for value in values {
    if !bit_writer.put_value(*value as u64, 1) {
        bit_writer.extend(256);
        bit_writer.put_value(*value as u64, 1);
    }
}
Rather than returning an error
Yea, we can either do this or make sure up front that there's enough capacity to write. One minor concern is that putting the if branch inside the for loop might hurt performance.
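A sketch of that up-front alternative: reserve the bytes a batch needs once, before the loop, so the per-value hot path carries no branch. BitSink here is an illustrative stand-in for the parquet BitWriter, assuming the required capacity can be computed from the batch size.

// Sketch only: one capacity check per batch instead of per value.
struct BitSink {
    buf: Vec<u8>,
    bits_written: usize,
}

impl BitSink {
    fn new() -> Self {
        Self { buf: Vec::new(), bits_written: 0 }
    }

    // Grow the buffer so that `additional_bits` more bits always fit.
    fn reserve_bits(&mut self, additional_bits: usize) {
        let needed_bytes = (self.bits_written + additional_bits + 7) / 8;
        if needed_bytes > self.buf.len() {
            self.buf.resize(needed_bytes, 0);
        }
    }

    fn put_bit(&mut self, bit: bool) {
        // Safe to write unconditionally once reserve_bits() has run.
        if bit {
            self.buf[self.bits_written / 8] |= 1 << (self.bits_written % 8);
        }
        self.bits_written += 1;
    }
}

fn write_batch(sink: &mut BitSink, values: &[bool]) {
    sink.reserve_bits(values.len()); // one check per batch, not per value
    for v in values {
        sink.put_bit(*v);
    }
}

fn main() {
    let mut sink = BitSink::new();
    write_batch(&mut sink, &[true; 10]);
    assert_eq!(sink.buf.len(), 2); // 10 bits round up to 2 bytes
}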
I found it hard to think of a good way to test this with the fix in place.
I preferred the "don't auto expand memory at the point of failure" approach because I'm fairly conservative and didn't want to make a change that was too wide in impact without a better understanding of the code. i.e.: my fix specifically targeted the error I reported and made it possible to detect in other locations.
I think a better fix would be to (somehow) pre-size the vector or avoid having to size a vector for all the bytes that could be written, but that would be a much bigger scope to the fix.
leaving the code as is seems fine to me
Good catch @garyanaplan!
parquet/src/data_type.rs
Outdated
@@ -661,8 +661,15 @@ pub(crate) mod private {
     _: &mut W,
     bit_writer: &mut BitWriter,
 ) -> Result<()> {
+    if bit_writer.bytes_written() + values.len() >= bit_writer.capacity() {
Seems here values.len() is the number of bits to be written? Should we use values.len() / 8?
I think this calculation is entirely in terms of bytes, so units should all be correct as is.
Hmm, I'm sorry, but can you elaborate why the unit of values is also bytes?
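For what it's worth, the units work out as follows: each BOOLEAN occupies one bit, so a batch of n values consumes ceil(n / 8) bytes of the writer's byte capacity. A small self-contained illustration (not code from the PR):

// A batch's byte cost is its bit count rounded up to whole bytes.
// 2048 values fill exactly 256 bytes, which is why the original fixed
// buffer overflowed on row 2049.
fn bytes_needed(num_bool_values: usize) -> usize {
    (num_bool_values + 7) / 8
}

fn main() {
    assert_eq!(bytes_needed(2048), 256);
    assert_eq!(bytes_needed(2049), 257);
}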
@garyanaplan what would you say to using the reproducer from #349 to test this issue? I realize it probably seems unnecessary for such a small code change, but the amount of effort that went into the reproducer was significant and I would hate to have some future optimization reintroduce the bug. If you don't have time, I can try to make time to create the test.
The problem with writing an effective test is that the error was only detected on file read, and the read behaviour was to hang indefinitely. Taken together, those characteristics make crafting an effective test difficult. To be effective, a test would need to write > 2048 boolean values to a test file, then read that file and not hang. I can think of ways to do that with a timeout, assuming that if the read doesn't finish within the timeout it must have failed. Such a test would rely on multi-threaded or async testing for co-ordination. I don't think there's any async stuff in parquet yet, so a multi-threaded test would be required. I'll knock something up and push it to this branch.
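A sketch of the shape such a test could take, using a thread plus an mpsc channel with a receive timeout; write_bool_file and read_bool_file are hypothetical helpers standing in for the real parquet write/read round trip.

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

#[test]
fn reading_many_booleans_does_not_hang() {
    let path = "bools.parquet";
    // Hypothetical helper: writes > 2048 BOOLEAN rows to trigger the
    // old buffer overflow.
    write_bool_file(path, 4096);

    let (sender, receiver) = mpsc::channel();
    thread::spawn(move || {
        // Hypothetical helper returning the row count; this read hung
        // forever before the fix.
        let rows: usize = read_bool_file(path);
        sender.send(rows).unwrap();
    });

    // If the reader hangs, recv_timeout returns Err and the test fails
    // instead of blocking the whole test run indefinitely.
    let rows = receiver
        .recv_timeout(Duration::from_secs(5))
        .expect("read did not finish within 5 seconds");
    assert_eq!(rows, 4096);
}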
@garyanaplan I don't think we need to do anything special for timeouts -- between the default Rust test runner and GitHub CI action timeouts, any test that hangs indefinitely will cause a failure (not run successfully).
The test ensures that we can write > 2048 rows to a parquet file and that when we read the data back, it finishes without hanging (defined as taking < 5 seconds). If we don't want that extra complexity, we could remove the thread/channel stuff and just try to read the file and let the test runner terminate hanging tests.
I'd already written the test, just been in meetings. If we'd rather rely on the test framework to terminate hanging tests, just remove the thread/mpsc/channel stuff and do a straight read after verifying the write looks ok.
println!("finished reading"); | ||
if let Ok(()) = sender.send(true) {} | ||
}); | ||
assert_ne!( |
You could also check assert_eq!(Ok(true), receiver.recv_timeout(Duration::from_millis(5000))) as well.
However, I think that is equivalent to what you have here. 👍 thank you
Either way is fine with me. Thank you @garyanaplan
The values.len() reports the number of values to be encoded and so must be divided by 8 (bits in a byte) to determine the effect on the byte capacity of the bit_writer.
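In other words, the corrected capacity check converts the value count (bits) into bytes before comparing it with the writer's byte capacity. A sketch using the method names from the diffs above (extend and capacity are the methods added to BitWriter by this PR):

use parquet::util::bit_util::BitWriter;

// Grow the writer by 256 bytes whenever the incoming batch (counted in
// bits, hence the division by 8) would exceed the current byte capacity.
fn ensure_capacity(bit_writer: &mut BitWriter, num_values: usize) {
    if bit_writer.bytes_written() + num_values / 8 >= bit_writer.capacity() {
        bit_writer.extend(256);
    }
}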
Which issue does this PR close?
Closes #349.
Rationale for this change
When writing BOOLEAN data, writing more than 2048 rows of data will
overflow the hard-coded 256 buffer set for the bit-writer in the
PlainEncoder. Once this occurs, further attempts to write to the encoder
fail, because capacity is exceeded but the errors are silently ignored.
This fix improves the error detection and reporting at the point of
encoding and modifies the logic for bit_writing (BOOLEANS). The
bit_writer is initially allocated 256 bytes (as at present), then each
time the capacity is exceeded the capacity is incremented by another
256 bytes.
This certainly resolves the current problem, but it's not exactly a
great fix because the capacity of the bit_writer could now grow
substantially.
Other data types seem to have a more sophisticated mechanism for writing
data which doesn't involve growing or having a fixed size buffer. It
would be desirable to make the BOOLEAN type use this same mechanism if
possible, but that level of change is more intrusive and probably
requires greater knowledge of the implementation than I possess.
What changes are included in this PR?
(see above)
Are there any user-facing changes?
No, although users may now encounter encoding errors that were previously silently ignored.