Allow calculating geometric mean of groups of benchmarks based on tags #208

mdboom opened this issue May 24, 2022 · 3 comments
@mdboom (Contributor) commented May 24, 2022:

[Moved from https://github.com/faster-cpython/ideas/discussions/395]

It's becoming obvious that:

  • The pyperformance suite needs more benchmarks that resemble real-world workloads, and we should lean into optimizing for those and using them to report progress.
  • Microbenchmarks of a particular feature are also useful and belong in the benchmark suite, but we shouldn't over-optimize for them or use them as a (misleading) indicator of overall progress.

It seems that one way to address this would be to lean into "tags" more in the pyperformance/pyperf ecosystem. pyperformance already allows tags in each benchmark's pyproject.toml.

I propose we:

  1. Output each benchmark's tags in its metadata dictionary in the benchmark results.
  2. pyperf compare_to would then calculate the geometric mean for the subset of benchmarks carrying each tag found in the results, in addition to the geometric mean over all benchmarks (the existing behavior). This could be behind a flag if backward compatibility matters. (See the sketch after this list.)
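
To make step 2 concrete, here is a rough sketch, not pyperf's actual implementation. The `geometric_means_by_tag` helper and the shape of its input are made up for illustration; it assumes each benchmark's tags are available from the results metadata and that per-benchmark means have already been normalized against the reference run.

```python
# Illustrative sketch only -- not pyperf's implementation.
import statistics
from collections import defaultdict

def geometric_means_by_tag(results):
    """results: iterable of (name, tags, ratio) tuples, where ratio is the
    benchmark's mean normalized to the reference run (e.g. 0.85 = 15% faster)."""
    groups = defaultdict(list)
    for name, tags, ratio in results:
        groups["all"].append(ratio)       # existing behavior: one overall number
        for tag in tags:
            groups[tag].append(ratio)     # proposed: one number per tag
    return {tag: statistics.geometric_mean(ratios) for tag, ratios in groups.items()}

# Example:
# geometric_means_by_tag([("2to3", ["apps"], 0.90), ("nbody", ["math"], 0.75)])
# -> {"all": ~0.822, "apps": 0.90, "math": 0.75}
```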

Alternatives:

We could instead use the nested benchmark hierarchy, rather than tags. Personally, I think tags are easier to understand and more flexible (a benchmark can be associated with multiple tags).

@ericsnowcurrently (Member) commented:

+1

IIRC, my intention with tags was for them to be an extension to benchmark groups in the manifest. However, I ran out of time to take it all the way.

(I also considered doing away with groups and instead having a different manifest per group. However, that ends up rather clunky and less user-friendly.)

@ericsnowcurrently (Member) commented:

Another thing we could do with tags/groups is split up a results file based on them.
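
As a rough sketch of that idea, assuming pyperf's documented `BenchmarkSuite.load()`/`dump()` and `Benchmark.get_metadata()` methods; the `"tags"` metadata key, its space-separated format, and the output file naming are assumptions for illustration:

```python
# Rough sketch: split one results file into one file per tag.
from collections import defaultdict
import pyperf

def split_by_tag(path):
    suite = pyperf.BenchmarkSuite.load(path)
    by_tag = defaultdict(list)
    for bench in suite.get_benchmarks():
        # Assumes tags are stored as a space-separated string in the metadata.
        for tag in bench.get_metadata().get("tags", "").split():
            by_tag[tag].append(bench)
    for tag, benchmarks in by_tag.items():
        # Write one suite per tag, e.g. results-apps.json, results-math.json
        pyperf.BenchmarkSuite(benchmarks).dump(f"results-{tag}.json")
```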

mdboom added commits to mdboom/pyperf that referenced this issue on May 24 and May 25, 2022, and vstinner pushed a commit to psf/pyperf that referenced this issue on Jun 16, 2022, with the message:

Addresses python/pyperformance#208

This reports geometric mean organized by the tag(s) assigned to each benchmark. This will allow us to include benchmarks in the pyperformance suite that we don't necessarily want to include in "one big overall number" to represent progress.
@ericsnowcurrently (Member) commented:

Is this done now?
