Don't call multiunzip when no stats #9220

Merged
2 changes: 1 addition & 1 deletion datafusion/core/benches/sql_planner.rs
@@ -234,7 +234,7 @@ fn criterion_benchmark(c: &mut Criterion) {
        let sql = std::fs::read_to_string(format!("../../benchmarks/queries/{}.sql", q))
            .unwrap();
        c.bench_function(&format!("physical_plan_tpch_{}", q), |b| {
-            b.iter(|| logical_plan(&ctx, &sql))
+            b.iter(|| physical_plan(&ctx, &sql))

Contributor Author:
There was a disconnect between the benchmark name (physical plan) and the function (logical plan), so I corrected this.

        });
    }

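For context on the fix above, here is a rough sketch of how such a pair of planning benchmark helpers can differ. The helper names `logical_plan` and `physical_plan` come from the hunk; their bodies below are assumptions for illustration, not the actual code in sql_planner.rs.

```rust
use criterion::black_box;
use datafusion::execution::context::SessionContext;
use tokio::runtime::Runtime;

// Sketch: stop after SQL parsing and logical planning.
fn logical_plan(ctx: &SessionContext, sql: &str) {
    let rt = Runtime::new().unwrap();
    black_box(rt.block_on(ctx.sql(sql)).unwrap());
}

// Sketch: also build the physical plan, which is what the
// `physical_plan_tpch_{q}` benchmark name advertises.
fn physical_plan(ctx: &SessionContext, sql: &str) {
    let rt = Runtime::new().unwrap();
    black_box(rt.block_on(async {
        ctx.sql(sql)
            .await
            .unwrap()
            .create_physical_plan()
            .await
            .unwrap()
    }));
}
```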
9 changes: 7 additions & 2 deletions datafusion/core/src/datasource/listing/table.rs
@@ -880,8 +880,13 @@ impl ListingTable {
            .boxed()
            .buffered(ctx.config_options().execution.meta_fetch_concurrency);

-        let (files, statistics) =
-            get_statistics_with_limit(files, self.schema(), limit).await?;
+        let (files, statistics) = get_statistics_with_limit(
+            files,
+            self.schema(),
+            limit,
+            self.options.collect_stat,
+        )
+        .await?;

        Ok((
            split_files(files, self.options.target_partitions),
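As a usage note, not part of this diff: `collect_stat` is the per-table flag on `ListingOptions`, so a table configured as in the sketch below would now also skip the summary-statistics work while listing files. The builder calls shown are assumed to match the usual `ListingOptions` API.

```rust
use std::sync::Arc;

use datafusion::datasource::file_format::parquet::ParquetFormat;
use datafusion::datasource::listing::ListingOptions;

// Listing options that opt out of statistics collection; with this change the
// table also avoids the multiunzip-based summary in get_statistics_with_limit.
fn listing_options_without_stats() -> ListingOptions {
    ListingOptions::new(Arc::new(ParquetFormat::default())).with_collect_stat(false)
}
```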
10 changes: 8 additions & 2 deletions datafusion/core/src/datasource/statistics.rs
@@ -29,12 +29,15 @@ use itertools::izip;
use itertools::multiunzip;

/// Get all files as well as the file level summary statistics (no statistic for partition columns).
-/// If the optional `limit` is provided, includes only sufficient files.
-/// Needed to read up to `limit` number of rows.
+/// If the optional `limit` is provided, includes only sufficient files. Needed to read up to
+/// `limit` number of rows. `collect_stats` is passed down from the configuration parameter on
+/// `ListingTable`. If it is false we only construct bare statistics and skip a potentially expensive
+/// call to `multiunzip` for constructing file level summary statistics.
pub async fn get_statistics_with_limit(
    all_files: impl Stream<Item = Result<(PartitionedFile, Statistics)>>,
    file_schema: SchemaRef,
    limit: Option<usize>,
+    collect_stats: bool,

Contributor:
Given the function is called `get_statistics` and now there is a boolean flag that implies the stats aren't collected, can we please add some documentation to the docstring explaining what the `collect_stats` flag does?

Contributor Author:
Yes, good catch, will do.

) -> Result<(Vec<PartitionedFile>, Statistics)> {
    let mut result_files = vec![];
    // These statistics can be calculated as long as at least one file provides

@@ -78,6 +81,9 @@ pub async fn get_statistics_with_limit(
    while let Some(current) = all_files.next().await {
        let (file, file_stats) = current?;
        result_files.push(file);
+        if !collect_stats {
+            continue;
+        }

        // We accumulate the number of rows, total byte size and null
        // counts across all the files in question. If any file does not
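To illustrate the new control flow without DataFusion's actual `PartitionedFile` and `Statistics` machinery, here is a self-contained sketch of the pattern this hunk introduces; the types and function below are hypothetical stand-ins.

```rust
// Hypothetical stand-in for per-file summary statistics.
#[derive(Default)]
struct Summary {
    num_rows: usize,
    total_byte_size: usize,
}

fn gather(files: Vec<(String, Summary)>, collect_stats: bool) -> (Vec<String>, Option<Summary>) {
    let mut result_files = Vec::new();
    let mut summary = Summary::default();

    for (file, file_stats) in files {
        // The file itself is always kept ...
        result_files.push(file);
        // ... but the potentially expensive accumulation is skipped when
        // statistics collection is disabled, mirroring the early `continue`.
        if !collect_stats {
            continue;
        }
        summary.num_rows += file_stats.num_rows;
        summary.total_byte_size += file_stats.total_byte_size;
    }

    // Only report a summary when statistics were actually collected.
    (result_files, collect_stats.then_some(summary))
}
```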