Error optimizing large table #1419
Comments
Referenced snippet: delta-rs/rust/src/operations/optimize.rs, lines 325 to 330 in b17f286 (embedded code view not captured in this excerpt).

From first glance, if the writer buffer exceeds the […] Is the check […]
Without having checked the code, in principle the bin packing should bin files such that they yield a single resulting file. My guess would be that either we have a bug in the binning logic, or the written file size is not as predictable as assumed. Are we e.g. using the same compression during the initial write and during optimization?
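As an illustration of that principle, here is a minimal first-fit bin-packing sketch over file sizes. This is not the actual optimize.rs implementation; the function name and target size are made up for the example. Note that even with correct binning, the size of the rewritten file is only an estimate, since compression and encoding change the on-disk size.

```rust
// Minimal first-fit bin-packing sketch (illustrative only, not optimize.rs).
fn bin_pack(file_sizes: &[u64], target_size: u64) -> Vec<Vec<u64>> {
    // Each bin tracks (bytes assigned so far, sizes of the files in it).
    let mut bins: Vec<(u64, Vec<u64>)> = Vec::new();
    for &size in file_sizes {
        // First-fit: index of the first bin that still has room for this file.
        match bins.iter().position(|b| b.0 + size <= target_size) {
            Some(i) => {
                bins[i].0 += size;
                bins[i].1.push(size);
            }
            // No bin has room; start a new one.
            None => bins.push((size, vec![size])),
        }
    }
    bins.into_iter().map(|(_, sizes)| sizes).collect()
}

fn main() {
    // With a 100 MB target, three 40 MB inputs pack into two bins,
    // each intended to be rewritten as a single output file.
    let bins = bin_pack(&[40_000_000, 40_000_000, 40_000_000], 100_000_000);
    assert_eq!(bins.len(), 2);
}
```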
I think the problem is that we don't set compression by default when optimizing. Also, the written file size can be unpredictable.
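For reference, a hedged sketch of setting compression explicitly via the parquet crate's `WriterProperties`. How (or whether) these properties are passed into the optimize operation depends on the delta-rs version (see the PR referenced below), so the wiring is an assumption; only the parquet API is shown.

```rust
use parquet::basic::Compression;
use parquet::file::properties::WriterProperties;

// Build explicit writer properties with a known codec so files written during
// optimize are compressed the same way as the original writes. Passing these
// to the optimize builder is version-dependent and left as an assumption.
fn optimize_writer_properties() -> WriterProperties {
    WriterProperties::builder()
        .set_compression(Compression::SNAPPY)
        .build()
}
```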
I'll try to address this in my PR #1383.
@wjones127 were you able to address this in the aforementioned PR, or is it still outstanding?

It will be addressed in that PR.
Closing this as it seems to be resolved in #1383.
Environment
Delta-rs version: 0.12
Binding: rust
Environment:
Bug
What happened:
Issuing optimize on a large table failed with
What you expected to happen:
Optimize to succeed
How to reproduce it:
Call
DeltaOps::from(table).optimize().await.unwrap();
on a table where the writer buffer would be filled when reading part of a source file.

More details:
Table consists of 37040 commits and ~75k files.
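A self-contained sketch of this reproduction is included below for reference. The table path is hypothetical and the tokio runtime attribute is an assumption; the table in this report had 37040 commits and roughly 75k files.

```rust
use deltalake::{open_table, DeltaOps};

#[tokio::main]
async fn main() {
    // Hypothetical path to a large existing Delta table.
    let table = open_table("/data/large_delta_table").await.unwrap();
    // The failing call from this report: compact the table's many small files.
    DeltaOps::from(table).optimize().await.unwrap();
}
```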