Is your feature request related to a problem or challenge? Please describe what you are trying to do.
When writing Arrow binary columns to Parquet, we create thousands of small ByteBuffer objects, and much of the writer's time is spent allocating and dropping them.
Describe the solution you'd like
A ByteBuffer is backed by a ByteBufferPtr, which is an alias for Arc<Vec<u8>> with similar abstractions for length and offset. If we were to create a single ByteBuffer from the Arrow data, we would reduce the allocations to one and then reuse this buffer when writing binary values.
Local experiments have shown reasonable improvements in the writer.
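A minimal sketch of the idea, assuming nothing about the actual parquet crate internals: SharedSlice and coalesce below are hypothetical stand-ins for a ByteBufferPtr-style view (Arc<Vec<u8>> plus offset and length). All binary values are copied into one backing buffer up front, so the per-value cost becomes cloning an Arc rather than allocating a fresh Vec.

```rust
use std::sync::Arc;

// Hypothetical stand-in for a ByteBufferPtr-style view: a shared,
// immutable byte region defined by an Arc'd backing store plus
// an offset and a length.
#[derive(Clone)]
struct SharedSlice {
    data: Arc<Vec<u8>>,
    offset: usize,
    len: usize,
}

impl SharedSlice {
    fn as_bytes(&self) -> &[u8] {
        &self.data[self.offset..self.offset + self.len]
    }
}

/// Copy all binary values into one backing buffer and return
/// cheap (offset, len) views into it.
fn coalesce(values: &[&[u8]]) -> Vec<SharedSlice> {
    let total: usize = values.iter().map(|v| v.len()).sum();
    let mut backing = Vec::with_capacity(total);
    let mut spans = Vec::with_capacity(values.len());
    for v in values {
        let offset = backing.len();
        backing.extend_from_slice(v);
        spans.push((offset, v.len()));
    }
    // The single allocation shared by every view.
    let backing = Arc::new(backing);
    spans
        .into_iter()
        .map(|(offset, len)| SharedSlice {
            data: backing.clone(),
            offset,
            len,
        })
        .collect()
}

fn main() {
    let slices = coalesce(&[b"foo" as &[u8], b"quux", b"ab"]);
    assert_eq!(slices[1].as_bytes(), b"quux".as_slice());
    println!("one backing buffer, {} views", slices.len());
}
```

In the Arrow-to-Parquet path the copy loop would be replaced by the column's existing contiguous value buffer, so even the one copy may be avoidable; the sketch only illustrates the allocation pattern, one Arc'd buffer reused across all values.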
Describe alternatives you've considered
I considered slicing into the Arrow buffer directly, but the parquet::encoding::Encoding is inflexible to this approach.
Additional context
I noticed this while profiling code and trying to simplify how we write nested lists.