Epic: Statistics improvements #8227
Comments
Recently I began looking into implementing #10316, and the proposed approach was to add per-partition statistics. According to @alamb's comment on #8078, I see that work on this epic has stalled since February. Is there interest in continuing it? If so, I'm a willing contributor, but it would help to know what needs to be done first.
If we have per-partition statistics, merging them will be problematic for NDV (number of distinct values). Extrapolation techniques are not likely to work.
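To illustrate why per-partition NDV cannot simply be merged, here is a toy sketch (the `ndv` helper is hypothetical, not a DataFusion API): per-partition NDVs only bound the true NDV of the union, they do not determine it.

```rust
use std::collections::HashSet;

/// Exact number of distinct values (NDV) in one partition
/// (toy helper for illustration only).
fn ndv(values: &[i64]) -> usize {
    values.iter().collect::<HashSet<_>>().len()
}

fn demo() -> (usize, usize, usize) {
    // Two partitions whose value sets overlap.
    let p1 = vec![1, 2, 3, 3];
    let p2 = vec![3, 4, 4, 5];

    let ndv1 = ndv(&p1); // 3
    let ndv2 = ndv(&p2); // 3

    // Per-partition stats only bound the merged NDV:
    //   max(ndv1, ndv2) <= true_ndv <= ndv1 + ndv2
    // Here the true value (5) is strictly between the bounds (3 and 6),
    // so no exact merge rule exists without extra information.
    let all: Vec<i64> = p1.iter().chain(p2.iter()).copied().collect();
    let true_ndv = ndv(&all);

    (ndv1, ndv2, true_ndv)
}
```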
Ok, well I suppose we can keep the existing global statistics and add a new per-partition statistics method (that defaults to returning the global statistics for each partition). That would probably be a less invasive change too. I'd be happy to discuss the details more over on #10316.
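A minimal sketch of that proposal, using stand-in types (`ExecutionPlanStats`, `ToyExec`, and the simplified partition count are all hypothetical, not DataFusion's actual API): the existing global method stays, and a per-partition variant defaults to repeating the global statistics.

```rust
/// Toy stand-in for DataFusion's `Statistics`; illustrative only.
#[derive(Clone, Debug, PartialEq)]
struct Statistics {
    num_rows: Option<usize>,
}

/// Hypothetical trait sketch: keep the global `statistics()` and add a
/// per-partition variant with a default implementation, so existing
/// operators need no changes.
trait ExecutionPlanStats {
    fn statistics(&self) -> Statistics;

    /// Number of output partitions (simplified to a plain count here).
    fn partition_count(&self) -> usize;

    /// Default: every partition reports the (less precise) global stats.
    fn statistics_by_partition(&self) -> Vec<Statistics> {
        vec![self.statistics(); self.partition_count()]
    }
}

struct ToyExec;

impl ExecutionPlanStats for ToyExec {
    fn statistics(&self) -> Statistics {
        Statistics { num_rows: Some(100) }
    }
    fn partition_count(&self) -> usize {
        4
    }
}
```

Operators that do track per-partition statistics can override `statistics_by_partition`, which is what makes the change non-invasive.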
Well, that may have also been my attempt / excuse :) -- especially if I didn't have enough time to work on it
I personally think we should go about this from the other end: try to implement the analysis for #10316, and use that as a vehicle to make any additional changes.
Here is one idea on how to improve Statistics / Precision. Let me know what you think.
Is your feature request related to a problem or challenge?
We would like to use "statistics" in our project for transformations that rely on the statistics being "correct" (e.g. that there are no values outside the `min` and `max` range). DataFusion has several optimizations like this too that rely on statistics being correct, such as skipping file scans when a limit is already satisfied, as in https://github.com/apache/arrow-datafusion/blob/e54894c39202815b14d9e7eae58f64d3a269c165/datafusion/core/src/datasource/statistics.rs#L34-L33. There are also suggestions of additional such optimizations, like #6672.
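A sketch of why exactness matters for the limit optimization above (the `RowCount` enum and `files_needed_for_limit` are hypothetical, not DataFusion code): only *exact* row counts may be used to stop scanning early, because an estimated count gives no guarantee and trusting it could silently drop rows.

```rust
/// Hypothetical per-file row-count statistic.
enum RowCount {
    Exact(usize),
    Estimate(usize),
}

/// Returns how many files (from the front of the list) must be scanned
/// to guarantee `limit` rows. Only exact counts contribute to the
/// guarantee; an estimated file might contain zero rows, so it is
/// counted as contributing nothing.
fn files_needed_for_limit(file_row_counts: &[RowCount], limit: usize) -> usize {
    let mut guaranteed = 0;
    for (i, rc) in file_row_counts.iter().enumerate() {
        if let RowCount::Exact(n) = rc {
            guaranteed += n;
        }
        if guaranteed >= limit {
            return i + 1; // files [0..=i] are provably enough
        }
    }
    // No guarantee reached: all files must be scanned.
    file_row_counts.len()
}
```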
However, the current Statistics code seems to make it hard to answer the question "are the statistics exact, and can they be guaranteed for transformations?" (@crepererum noted this quite some time ago on #5613). This has recently led to several bugs.
We would like to make it clearer what is known and what is an estimate (e.g. the min/max of row counts may be known, but the actual value may be an estimate after a filter). This is described in more detail on #8078.
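One way to express the idea from #8078 is to attach exactness to every individual statistic instead of a single global `is_exact` flag. This is a simplified sketch, not DataFusion's actual API; the variant names and `add` combinator are illustrative.

```rust
/// Sketch: a statistic value tagged with how much it can be trusted.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Precision<T> {
    /// The value is known to be correct.
    Exact(T),
    /// The value is an estimate (e.g. row count after a filter).
    Inexact(T),
    /// Nothing is known; replaces the ambiguous `Some(0)`.
    Absent,
}

impl<T: std::ops::Add<Output = T>> Precision<T> {
    /// Combining statistics by addition: the result is exact only
    /// when both inputs are exact; any estimate taints the result,
    /// and a missing input makes the result unknown.
    fn add(self, other: Self) -> Self {
        use Precision::*;
        match (self, other) {
            (Exact(a), Exact(b)) => Exact(a + b),
            (Exact(a), Inexact(b))
            | (Inexact(a), Exact(b))
            | (Inexact(a), Inexact(b)) => Inexact(a + b),
            _ => Absent,
        }
    }
}
```

With this shape, optimizations can pattern-match on `Exact(..)` and safely ignore everything else, rather than consulting a separate flag that can drift out of sync with the values.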
As we began exploring this concept, we ran into several issues with Statistics, and I think it is getting big enough to warrant its own tracking epic.
Related items
- show statistics #8111
- `ParquetExec::statistics()` does not read statistics for many column types (like timestamps, strings, etc) #8295
- `ParquetExec::statistics::is_exact` likely wrong/misunderstood #5614
- `Statistics::is_exact` semantics #5613
- `num_rows` and `total_byte_size` are not defined (stat should be None instead of Some(0)) #2976
- Pruning improvements (maybe should be its own epic)
  - `<col> = 'const'` in `PruningPredicate` #8376

Describe the solution you'd like
No response
Describe alternatives you've considered
No response
Additional context
This is somewhat related