Allow calculating geometric mean of groups of benchmarks based on tags #208
Comments
+1 IIRC, my intention with tags was for them to be an extension to benchmark groups in the manifest. However, I ran out of time to take it all the way. (I also considered doing away with groups and instead having a different manifest per group. However, that ends up rather clunky and less user-friendly.)
Another thing we could do with tags/groups is split up a results file based on them.
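As a rough illustration of that idea, here is a minimal sketch of partitioning benchmark results by tag, assuming each benchmark record carries its tags in a metadata mapping. The record shape, benchmark names, and tags below are hypothetical placeholders, not pyperf's actual results format.

```python
from collections import defaultdict

# Hypothetical per-benchmark records; a real pyperf results file is JSON with
# a different structure. This only illustrates the partitioning idea.
results = [
    {"name": "json_dumps", "metadata": {"tags": ["serialize"]}},
    {"name": "nbody", "metadata": {"tags": ["math"]}},
    {"name": "pickle", "metadata": {"tags": ["serialize", "apps"]}},
]

def split_by_tag(benchmarks):
    """Group benchmark records by each tag they carry.

    A benchmark with several tags lands in several groups, which is one
    reason tags are more flexible than a strict hierarchy.
    """
    groups = defaultdict(list)
    for bench in benchmarks:
        for tag in bench["metadata"].get("tags", []):
            groups[tag].append(bench)
    return dict(groups)

for tag, benches in split_by_tag(results).items():
    # Each group could then be written out as its own results file.
    print(tag, [b["name"] for b in benches])
```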
mdboom added a commit to mdboom/pyperf that referenced this issue on May 24, 2022:

Addresses python/pyperformance#208. This reports geometric mean organized by the tag(s) assigned to each benchmark. This will allow us to include benchmarks in the pyperformance suite that we don't necessarily want to include in "one big overall number" to represent progress.
This was referenced on May 24, 2022.
mdboom added a commit to mdboom/pyperf that referenced this issue on May 25, 2022:

Addresses python/pyperformance#208. This reports geometric mean organized by the tag(s) assigned to each benchmark. This will allow us to include benchmarks in the pyperformance suite that we don't necessarily want to include in "one big overall number" to represent progress.
vstinner pushed a commit to psf/pyperf that referenced this issue on Jun 16, 2022:

This reports geometric mean organized by the tag(s) assigned to each benchmark. This will allow us to include benchmarks in the pyperformance suite that we don't necessarily want to include in "one big overall number" to represent progress. Addresses python/pyperformance#208.
Is this done now?
[Moved from https://github.com/faster-cpython/ideas/discussions/395]
It's becoming obvious that:
It seems that one way to address this would be to lean into "tags" more in the pyperformance/pyperf ecosystem. pyperformance already allows for tags in each benchmark's pyproject.yaml.

I propose we:

- Pass each benchmark's tags along into the results, in the metadata dictionary.
- Have pyperf's compare_to then calculate the geometric mean for each subset of benchmarks for each tag found in the results, as well as for "all" benchmarks (the existing behavior). This could be behind a flag if backward compatibility matters.

Alternatives:

- We could instead use the nested benchmark hierarchy, rather than tags. Personally, I think tags are easier to understand and more flexible (a benchmark could be associated with multiple tags).
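To make the compare_to part of the proposal concrete, here is a minimal sketch of the per-tag geometric mean calculation, assuming each benchmark already has a timing ratio (new mean / reference mean) and a list of tags taken from its results metadata. The data shapes, names, and numbers are assumptions for illustration only, not pyperf's actual API or output.

```python
from collections import defaultdict
from statistics import geometric_mean

# Hypothetical inputs: per-benchmark ratios (new time / reference time) and
# the tags each benchmark would carry in its results metadata.
ratios = {"json_dumps": 0.92, "nbody": 0.80, "pickle": 1.05}
tags = {"json_dumps": ["serialize"], "nbody": ["math"], "pickle": ["serialize", "apps"]}

def geomean_by_group(ratios, tags):
    """Geometric mean per tag, plus the existing "all" group over every benchmark."""
    groups = defaultdict(list)
    for name, ratio in ratios.items():
        groups["all"].append(ratio)       # existing behavior: one overall number
        for tag in tags.get(name, []):
            groups[tag].append(ratio)     # a benchmark may contribute to several tags
    return {group: geometric_mean(values) for group, values in groups.items()}

for group, value in sorted(geomean_by_group(ratios, tags).items()):
    print(f"{group}: {value:.3f}x")
```

A benchmark with multiple tags contributes to each of its tag groups, so the per-tag means overlap; only the "all" group counts every benchmark exactly once.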