FIX: some benchmarks are failing #15367
Conversation
alamb left a comment
Thanks for this @getChan
```diff
-    |b| b.iter(|| run(distinct_trace_id_100_partitions_100_000_samples_limit_100.0.clone(),
-        distinct_trace_id_100_partitions_100_000_samples_limit_100.1.clone())),
+    |b| b.iter(|| {
+        let rt = Runtime::new().unwrap();
```
I think this means that the benchmark will include the time to create each tokio runtime (with a bunch of threads, etc.).
To avoid this, I think you can create the runtime once and then use it for each iteration:
```rust
let rt = Runtime::new().unwrap();
c.bench_function(
    format!("distinct query with {} partitions and {} samples per partition with limit {}", partitions, samples, limit).as_str(),
    |b| b.iter(|| {
        // ... run the query on the shared runtime ...
    }),
);
```
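For reference, a minimal self-contained sketch of that shared-runtime pattern with criterion and tokio; the function name, benchmark name, and the body of the measured closure are placeholders rather than the actual benchmark code:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;

fn bench_distinct_query(c: &mut Criterion) {
    // Create the runtime once, outside the measured closure, so the cost of
    // spinning up its thread pool is paid once instead of on every iteration.
    let rt = Runtime::new().unwrap();
    c.bench_function("distinct query (shared runtime)", |b| {
        b.iter(|| {
            // Only the work driven by block_on is timed.
            rt.block_on(async {
                // placeholder: build and collect the query here
            })
        })
    });
}

criterion_group!(benches, bench_distinct_query);
criterion_main!(benches);
```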
Got it. I changed the runtime to be shared between iterations.
However, it seems that other benchmark code still creates a runtime within the iteration.
Filed as #15507
Force-pushed from c7414b2 to c36e781.
Thank you for the fix! I noticed two other tests panicked on the same line of source code; is this fix still applicable?

@2010YOUY01
alamb left a comment
Let's get this in and keep iterating on the benchmarks.
* distinct_query_sql, topk_aggregate
* cargo clippy
* cargo fmt
* share runtime
Which issue does this PR close?
Rationale for this change
It is not certain, but it seems that plan creation and `collect()` should share the same runtime. It is presumed that the issue occurred because `RepartitionExec` lazily polls within the runtime (see `RepartitionExec::execute`, #10009). I will add more details if I find anything additional.
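To make the suspected failure mode concrete, here is a hedged sketch (the table, query, and function names are invented for illustration): if the plan is built on one runtime and collected on another, work spawned lazily during planning or execution is polled from a runtime other than the one it belongs to.

```rust
use datafusion::prelude::SessionContext;
use tokio::runtime::Runtime;

// Illustrative only: plan creation and collect() on *different* runtimes,
// roughly the shape the failing benchmarks may have had.
fn suspected_failure(ctx: &SessionContext) {
    let plan_rt = Runtime::new().unwrap();
    // ctx.sql() is async, so the plan is created while plan_rt is current.
    let df = plan_rt
        .block_on(ctx.sql("SELECT DISTINCT a FROM t LIMIT 100"))
        .unwrap();

    let collect_rt = Runtime::new().unwrap();
    // collect() polls the physical plan (including RepartitionExec) on
    // collect_rt; tasks tied to plan_rt may then panic or hang.
    let _batches = collect_rt.block_on(df.collect()).unwrap();
}
```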
What changes are included in this PR?
Moved `Runtime::new()` into `bench_function` so that plan creation (`ctx.sql()`) and `collect()` share the same runtime.
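As a sketch of the resulting shape (the context setup and query are placeholders), both steps are now driven by a single runtime:

```rust
use datafusion::prelude::SessionContext;
use tokio::runtime::Runtime;

// Illustrative only: one runtime drives both plan creation (ctx.sql) and
// execution (collect), so lazily spawned tasks stay on the runtime that
// keeps polling them.
fn fixed_shape(ctx: &SessionContext) {
    let rt = Runtime::new().unwrap();
    let _batches = rt
        .block_on(async {
            let df = ctx.sql("SELECT DISTINCT a FROM t LIMIT 100").await?;
            df.collect().await
        })
        .unwrap();
}
```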
Are these changes tested?

Yes; the benchmarks below succeeded:

```shell
cargo bench -p datafusion --bench topk_aggregate
cargo bench -p datafusion --bench distinct_query_sql
```

Are there any user-facing changes?
No.