This repository has been archived by the owner on Feb 18, 2024. It is now read-only.
Refactored parquet statistics deserialization #962
Merged
This PR modifies the APIs around parquet statistics to make it easier for consumers to use them.
Background
Parquet statistics are stored on a per-row-group basis, as two values (min, max) per (row group, column chunk).
We currently deserialize them to Arrow so that Arrow-native operators can be applied to them according to their logical type, e.g. for filter pushdown. However, we do this one row group at a time (i.e. one value per row group).
This has 3 problems:
This PR
This PR refactors the whole parquet statistics API to leverage arrow arrays. The core ideas are:
For example, a single row group with a column of int64 has
`Statistics` has the following invariants:
- `null_count.len() == distinct_count.len() == min_value.len() == max_value.len()`
- `min_value.data_type() == max_value.data_type()`
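As a sketch of the length invariant, here is a hypothetical, simplified model of the per-column statistics (plain `Vec`s stand in for arrow2's array types, and `Option` models a null slot; the struct and method names are illustrative, not the PR's actual API):

```rust
// Hypothetical model: one entry per row group in each vector.
// In the real PR these are Arrow arrays, not Vecs.
struct Statistics {
    null_count: Vec<Option<u64>>,
    distinct_count: Vec<Option<u64>>,
    min_value: Vec<Option<i64>>, // for an int64 column
    max_value: Vec<Option<i64>>,
}

impl Statistics {
    /// Checks the length invariant: all four arrays cover the same row groups.
    fn is_consistent(&self) -> bool {
        let n = self.null_count.len();
        self.distinct_count.len() == n
            && self.min_value.len() == n
            && self.max_value.len() == n
    }
}

fn main() {
    // Two row groups; the second carries no min/max statistics.
    let stats = Statistics {
        null_count: vec![Some(0), Some(3)],
        distinct_count: vec![None, None],
        min_value: vec![Some(-5), None],
        max_value: vec![Some(10), None],
    };
    assert!(stats.is_consistent());
    println!("invariants hold for {} row groups", stats.null_count.len());
}
```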
The idea is that consumers can easily compute row-group pruning via `min_value` and `max_value`. Note that `Count` can be either a `UInt64Array` or a `StructArray`, since struct arrays carry null and distinct counts for each of their fields.
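With one min/max value per row group, pruning becomes a single vectorized comparison over those arrays. A minimal sketch of that idea for an equality predicate (again with `Vec<Option<i64>>` standing in for the Arrow arrays; the function name `can_skip` is hypothetical):

```rust
/// For each row group, returns whether it can be skipped for the predicate
/// `value == needle`: a group is skippable only when its statistics prove
/// the needle lies outside [min, max].
/// `None` means the statistic is missing, so the group must be read.
fn can_skip(min: &[Option<i64>], max: &[Option<i64>], needle: i64) -> Vec<bool> {
    min.iter()
        .zip(max)
        .map(|(lo, hi)| match (lo, hi) {
            (Some(lo), Some(hi)) => needle < *lo || needle > *hi,
            _ => false, // missing stats: cannot prune
        })
        .collect()
}

fn main() {
    // Three row groups: values in [0, 9], [10, 19], and one without statistics.
    let min = [Some(0), Some(10), None];
    let max = [Some(9), Some(19), None];
    // Only the first group provably excludes 12; the third has no stats.
    assert_eq!(can_skip(&min, &max, 12), vec![true, false, false]);
    println!("skippable groups: {:?}", can_skip(&min, &max, 12));
}
```

In the real API the same comparison would be expressed with Arrow compute kernels over the `min_value`/`max_value` arrays instead of a hand-written loop.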