[MINOR]: Unknown input statistics in FilterExec #7544

Closed · wants to merge 3 commits

28 changes: 28 additions & 0 deletions datafusion/common/src/stats.rs
@@ -19,6 +19,8 @@

use std::fmt::Display;

use arrow::datatypes::SchemaRef;

use crate::ScalarValue;

/// Statistics for a relation
@@ -70,3 +72,29 @@ pub struct ColumnStatistics {
/// Number of distinct values
pub distinct_count: Option<usize>,
}

impl ColumnStatistics {
/// Returns a [`Vec<ColumnStatistics>`] for the given schema by assigning infinite
/// (unbounded) bounds to every column. This is useful when the input statistics are
/// entirely unknown, since the current executor may still be able to shrink the bounds of some columns.
pub fn new_with_unbounded_columns(schema: SchemaRef) -> Vec<ColumnStatistics> {
let data_types = schema
.fields()
.iter()
.map(|field| field.data_type())
.collect::<Vec<_>>();

data_types
.into_iter()
.map(|data_type| {
let dt = ScalarValue::try_from(data_type.clone()).ok();
ColumnStatistics {
null_count: None,
max_value: dt.clone(),
min_value: dt,
distinct_count: None,
}
})
.collect()
}
}
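
As a usage illustration of the helper added above (not part of the diff): a minimal sketch assuming `ColumnStatistics` and `ScalarValue` are re-exported from the `datafusion_common` crate root, as on the branch this PR targets. The schema and assertions are illustrative only.

// Illustrative sketch, not part of the patch.
use std::sync::Arc;

use arrow::datatypes::{DataType, Field, Schema};
use datafusion_common::{ColumnStatistics, ScalarValue};

fn main() {
    // Two-column schema for which no input statistics are known.
    let schema = Arc::new(Schema::new(vec![
        Field::new("a", DataType::Int32, false),
        Field::new("b", DataType::Utf8, true),
    ]));

    // One entry per column; min/max become typed nulls (i.e. unbounded),
    // while null/distinct counts stay unknown.
    let stats = ColumnStatistics::new_with_unbounded_columns(schema);
    assert_eq!(stats.len(), 2);
    assert_eq!(stats[0].min_value, Some(ScalarValue::Int32(None)));
    assert_eq!(stats[0].max_value, Some(ScalarValue::Int32(None)));
    assert_eq!(stats[1].distinct_count, None);
}
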
46 changes: 44 additions & 2 deletions datafusion/core/src/physical_plan/filter.rs
@@ -197,7 +197,7 @@ impl ExecutionPlan for FilterExec {
let input_stats = self.input.statistics();
let input_column_stats = match input_stats.column_statistics {
Some(stats) => stats,
None => return Statistics::default(),
None => ColumnStatistics::new_with_unbounded_columns(self.schema()),
};

let starter_ctx =
@@ -208,7 +208,7 @@
Err(_) => return Statistics::default(),
};

let selectivity = analysis_ctx.selectivity.unwrap_or(1.0);
let selectivity = analysis_ctx.selectivity.unwrap();

let num_rows = input_stats
.num_rows
@@ -977,4 +977,46 @@ mod tests {

Ok(())
}

#[tokio::test]
async fn test_empty_input_statistics() -> Result<()> {
let schema = Schema::new(vec![Field::new("a", DataType::Int32, false)]);
let input = Arc::new(StatisticsExec::new(Statistics::default(), schema));
// WHERE a <= 10 AND 0 <= a - 5
let predicate = Arc::new(BinaryExpr::new(
Arc::new(BinaryExpr::new(
Arc::new(Column::new("a", 0)),
Operator::LtEq,
Arc::new(Literal::new(ScalarValue::Int32(Some(10)))),
)),
Operator::And,
Arc::new(BinaryExpr::new(
Arc::new(Literal::new(ScalarValue::Int32(Some(0)))),
Operator::LtEq,
Arc::new(BinaryExpr::new(
Arc::new(Column::new("a", 0)),
Operator::Minus,
Arc::new(Literal::new(ScalarValue::Int32(Some(5)))),
)),
)),
));
let filter: Arc<dyn ExecutionPlan> =
Arc::new(FilterExec::try_new(predicate, input)?);
let filter_statistics = filter.statistics();

let expected_filter_statistics = Statistics {
num_rows: None,
total_byte_size: None,
column_statistics: Some(vec![ColumnStatistics {
null_count: None,
min_value: Some(ScalarValue::Int32(Some(5))),
max_value: Some(ScalarValue::Int32(Some(10))),
distinct_count: None,
}]),
is_exact: false,
};
assert_eq!(filter_statistics, expected_filter_statistics);

Ok(())
}
}
3 changes: 2 additions & 1 deletion datafusion/physical-expr/src/analysis.rs
@@ -196,7 +196,8 @@ fn shrink_boundaries(
&final_result.upper.value,
&target_boundaries,
&initial_boundaries,
)?;
)
.unwrap_or(1.0);
Contributor Author

In this implementation, columns with infinite bounds are not handled, so the call was returning an error and the newly calculated bounds could not be set. This is a workaround for now; I will refactor these parts in the context of this issue. (A small sketch of the fallback behavior follows this hunk.)


if !(0.0..=1.0).contains(&selectivity) {
return internal_err!("Selectivity is out of limit: {}", selectivity);
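
To make the effect of the `unwrap_or(1.0)` fallback above concrete, here is a minimal, self-contained sketch (not part of the patch); `estimate_selectivity` is a hypothetical helper standing in for the real calculation in `analysis.rs`.

// Illustrative sketch, not part of the patch.
// `estimate_selectivity` is a hypothetical stand-in for the real calculation,
// which can fail when a column has infinite (unknown) bounds.
fn estimate_selectivity(lower: Option<i64>, upper: Option<i64>) -> Result<f64, String> {
    match (lower, upper) {
        // Finite bounds: pretend the predicate keeps half of the range.
        (Some(lo), Some(hi)) if hi >= lo => Ok(0.5),
        // Unbounded column: the estimate cannot be computed.
        _ => Err("unbounded column: selectivity unknown".to_string()),
    }
}

fn main() {
    // Before this patch the error was propagated with `?`, so callers fell back
    // to `Statistics::default()`. With `unwrap_or(1.0)` the conservative default
    // of "keep every row" is used instead, and the analysis continues.
    let selectivity = estimate_selectivity(None, None).unwrap_or(1.0);
    assert!((0.0..=1.0).contains(&selectivity));
    assert_eq!(selectivity, 1.0);
}
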
25 changes: 13 additions & 12 deletions datafusion/sqllogictest/test_files/subquery.slt
@@ -284,19 +284,20 @@ Projection: t1.t1_id, __scalar_sq_1.SUM(t2.t2_int) AS t2_sum
------------TableScan: t2 projection=[t2_id, t2_int]
physical_plan
ProjectionExec: expr=[t1_id@0 as t1_id, SUM(t2.t2_int)@1 as t2_sum]
--CoalesceBatchesExec: target_batch_size=8192
----HashJoinExec: mode=Partitioned, join_type=Left, on=[(t1_id@0, t2_id@1)]
------CoalesceBatchesExec: target_batch_size=8192
--------RepartitionExec: partitioning=Hash([t1_id@0], 4), input_partitions=4
----------MemoryExec: partitions=4, partition_sizes=[1, 0, 0, 0]
------ProjectionExec: expr=[SUM(t2.t2_int)@1 as SUM(t2.t2_int), t2_id@0 as t2_id]
--ProjectionExec: expr=[t1_id@2 as t1_id, SUM(t2.t2_int)@0 as SUM(t2.t2_int), t2_id@1 as t2_id]
----CoalesceBatchesExec: target_batch_size=8192
------HashJoinExec: mode=Partitioned, join_type=Right, on=[(t2_id@1, t1_id@0)]
--------ProjectionExec: expr=[SUM(t2.t2_int)@1 as SUM(t2.t2_int), t2_id@0 as t2_id]
----------CoalesceBatchesExec: target_batch_size=8192
------------FilterExec: SUM(t2.t2_int)@1 < 3
--------------AggregateExec: mode=FinalPartitioned, gby=[t2_id@0 as t2_id], aggr=[SUM(t2.t2_int)]
----------------CoalesceBatchesExec: target_batch_size=8192
------------------RepartitionExec: partitioning=Hash([t2_id@0], 4), input_partitions=4
--------------------AggregateExec: mode=Partial, gby=[t2_id@0 as t2_id], aggr=[SUM(t2.t2_int)]
----------------------MemoryExec: partitions=4, partition_sizes=[1, 0, 0, 0]
--------CoalesceBatchesExec: target_batch_size=8192
----------FilterExec: SUM(t2.t2_int)@1 < 3
------------AggregateExec: mode=FinalPartitioned, gby=[t2_id@0 as t2_id], aggr=[SUM(t2.t2_int)]
--------------CoalesceBatchesExec: target_batch_size=8192
----------------RepartitionExec: partitioning=Hash([t2_id@0], 4), input_partitions=4
------------------AggregateExec: mode=Partial, gby=[t2_id@0 as t2_id], aggr=[SUM(t2.t2_int)]
--------------------MemoryExec: partitions=4, partition_sizes=[1, 0, 0, 0]
----------RepartitionExec: partitioning=Hash([t1_id@0], 4), input_partitions=4
------------MemoryExec: partitions=4, partition_sizes=[1, 0, 0, 0]

query II rowsort
SELECT t1_id, (SELECT sum(t2_int) FROM t2 WHERE t2.t2_id = t1.t1_id having sum(t2_int) < 3) as t2_sum from t1