
[NFC] BenchmarkRunner: always populate *_report_aggregates_only bools. #708

Merged

Conversation

LebedevRI
Collaborator

It is better to let the RunBenchmarks(), report() decide
whether to actually *only* output aggregates or not,
depending on whether there are actually aggregates.
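
For context, a minimal usage sketch of the settings these bools reflect (not part of this patch; it assumes the usual `Repetitions()` / `ReportAggregatesOnly()` builder calls, and the benchmark names are made up):

```c++
#include <benchmark/benchmark.h>

static void BM_MultiRep(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
// With several repetitions, aggregates (mean/median/stddev) exist, so asking
// for "aggregates only" is meaningful and the per-repetition rows are hidden.
BENCHMARK(BM_MultiRep)->Repetitions(10)->ReportAggregatesOnly(true);

static void BM_SingleRep(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
// With a single repetition there are no aggregates to show; after this patch
// the bool is still populated, and the reporting side decides what to do with it.
BENCHMARK(BM_SingleRep)->Repetitions(1)->ReportAggregatesOnly(true);

BENCHMARK_MAIN();
```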
@AppVeyorBot

Build benchmark 1532 completed (commit 823fb4aa64 by @LebedevRI)

@dmah42
Member

dmah42 commented Oct 18, 2018

I'm missing the subtlety here. Why is it OK to remove the results count check?

@LebedevRI
Collaborator Author

LebedevRI commented Oct 18, 2018

It's subtle indeed.

Previously, `BenchmarkRunner()` always said "if there are no repetitions,
then you should never output only the aggregates". And the `report()` lambda simply
assumed that the `report_aggregates_only` bool it received made sense, and used it as-is.

Now the logic is the same, but the blame has shifted:
`BenchmarkRunner()` always propagates what those benchmarks would have wanted
to happen wrt the aggregates, and the `report()` lambda has to actually consider
both the `report_aggregates_only` bool and its meaningfulness.

To put it in the context of the patch series: if the repetition count was `1`,
but `*_report_aggregates_only` was set to `true`, and we capture each iteration separately,
then we will compute the aggregates, but then output everything, both the iterations
and the aggregates, despite `*_report_aggregates_only` being set to `true`.

Did that make sense?
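
Roughly, the check the `report()` side has to make now looks like this (a hypothetical sketch, not the actual library code; `Run`, `ShouldReportAggregatesOnly()`, and `have_aggregates` are illustrative names):

```c++
#include <algorithm>
#include <vector>

// Stand-in for a single benchmark run; the real type would be
// benchmark::BenchmarkReporter::Run.
struct Run {
  bool is_aggregate = false;  // true for mean/median/stddev rows
};

// The "*_report_aggregates_only" wish is only honored when aggregates
// actually exist, i.e. when there was more than one repetition.
bool ShouldReportAggregatesOnly(bool report_aggregates_only,
                                const std::vector<Run>& runs) {
  const bool have_aggregates =
      std::any_of(runs.begin(), runs.end(),
                  [](const Run& r) { return r.is_aggregate; });
  return report_aggregates_only && have_aggregates;
}
```

With a check like that on the reporting side, `BenchmarkRunner()` no longer needs its own results-count check.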

@LebedevRI
Collaborator Author

It's basically the same problem as the cause for the #707 rename.

@LebedevRI
Collaborator Author

Thank you for the review!

LebedevRI merged commit 99d1356 into google:master Oct 18, 2018
LebedevRI deleted the BenchmarkRunner-always-populate-bools branch October 18, 2018 12:09
EricWF pushed a commit to efcs/benchmark that referenced this pull request Nov 29, 2018
JBakamovic pushed a commit to JBakamovic/benchmark that referenced this pull request Dec 6, 2018