
compare.py: sort the results #1168

Merged · 1 commit into google:main · Jun 3, 2021

Conversation

LebedevRI (Collaborator)

Currently, the tooling just keeps whatever benchmark order was present in the input. That is fine nowadays, but once the benchmarks can optionally be run interleaved, it will be rather suboptimal.

So, now that I have introduced a family index and a per-family instance index, we can define an order for the benchmarks and sort them accordingly.

There is a caveat with aggregates: we assume that they are in-order, and hopefully we won't mess that order up.
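To make that concrete, here is a minimal sketch of such a sort in Python. This is not the actual compare.py change: the JSON field names (`family_index`, `per_family_instance_index`, `repetition_index`) follow the library's JSON reporter output, and the handling of aggregates is an assumption based on the caveat above.

```python
import sys

def benchmark_sort_key(run):
    # Order primarily by benchmark family, then by the instance within
    # that family. Aggregates (mean/median/stddev) may lack a
    # repetition_index, so default them past all plain repetitions;
    # Python's sort being stable then preserves their original relative
    # order, which is exactly the "aggregates are assumed in-order" caveat.
    return (
        run["family_index"],
        run["per_family_instance_index"],
        run.get("repetition_index", sys.maxsize),
    )

def sort_results(json_results):
    # json_results is the parsed JSON output of a benchmark binary.
    json_results["benchmarks"].sort(key=benchmark_sort_key)
    return json_results
```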

google-cla bot added the cla: yes label on Jun 3, 2021
@LebedevRI force-pushed the compare-py-sorting branch from 8d70c3b to b414e7f on June 3, 2021 11:46
@LebedevRI merged commit 6e32352 into google:main on Jun 3, 2021
@LebedevRI deleted the compare-py-sorting branch on June 3, 2021 13:22
dmah42 pushed a commit that referenced this pull request Jun 3, 2021
LebedevRI added a commit to LebedevRI/benchmark that referenced this pull request Jun 3, 2021
…le#1051)

Inspired by the original implementation by Hai Huang @haih-g from google#1105.

The original implementation had design deficiencies that weren't really addressable without a redesign, so it was reverted.

In essence, the original implementation consisted of two separable parts:
* reducing the amount of time each repetition runs for, and symmetrically increasing the repetition count
* running the repetitions in random order

While it worked fine for the usual case, it broke down when the user specified repetitions (it would completely ignore that request) or specified a per-repetition min time (while it would still adjust the repetition count, it would not adjust the per-repetition time, leading to much greater run times).

Here, as I originally suggested in the original review, I'm separating the features and dealing with only a single one: running the repetitions in random order.

Now that the runs/repetitions are no longer in-order, the tooling may wish to sort the output, and indeed `compare.py` has been updated to do that: google#1168.
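As an illustration of the random-order idea only (the actual feature lives in the C++ library runner, not in the Python tooling), here is a sketch: enumerate every (benchmark, repetition) pair up front, shuffle, and execute. Transient machine noise is then spread across all benchmarks instead of systematically biasing whichever one runs last, but the results arrive out of order, which is why compare.py now sorts them.

```python
import random

def run_interleaved(benchmarks, repetitions, run_one):
    # Build the full schedule of (benchmark, repetition index) pairs...
    schedule = [(bench, rep) for bench in benchmarks
                for rep in range(repetitions)]
    # ...and execute it in random order. Each repetition still runs for
    # its full time; only the ordering changes, which is the single
    # feature this commit keeps from the reverted google#1105.
    random.shuffle(schedule)
    return [run_one(bench, rep) for bench, rep in schedule]
```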
LebedevRI added a commit that referenced this pull request Jun 3, 2021
… (#1163)
vincenzopalazzo pushed a commit to vincenzopalazzo/benchmark that referenced this pull request Feb 8, 2022
vincenzopalazzo pushed a commit to vincenzopalazzo/benchmark that referenced this pull request Feb 8, 2022
…le#1051) (google#1163)