
Add a way to plug in custom benchmarks. #89

Closed
ericsnowcurrently opened this issue Mar 18, 2021 · 4 comments

Comments

@ericsnowcurrently
Member

Currently you have to modify this repo if you want to run a custom benchmark. It would be nice to have a mechanism by which a custom benchmark could be plugged in externally.

(This isn't a priority.)

@vstinner
Member

pyperformance is just a set of benchmarks written with pyperf; it runs them sequentially, using the --append option to add all results to a single JSON file. Most of the work of managing benchmark results is done by pyperf. See the pyperf --append option:
https://pyperf.readthedocs.io/en/latest/runner.html#json-output
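
For context, a benchmark in this style is just a small standalone pyperf script. Here is a minimal sketch (the file name and workload are made up for illustration, not part of pyperformance):

# bm_custom.py -- hypothetical minimal pyperf benchmark script
import pyperf

def workload():
    # stand-in for the real code you want to measure
    return sum(i * i for i in range(10_000))

runner = pyperf.Runner()
# bench_func runs the callable many times and records the timings;
# passing --append FILE on the command line appends the result to FILE
runner.bench_func("custom_workload", workload)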

You can run your custom benchmark manually with --append. I commonly run benchmarks manually myself:

# Benchmark Python 3.9
$ python3.9 bm_telco.py --append py39.json
.....................
telco: Mean +- std dev: 8.16 ms +- 0.39 ms

$ python3.9 bm_unpack_sequence.py --append py39.json
.....................
unpack_sequence: Mean +- std dev: 80.6 ns +- 3.2 ns

$ python3.9 -m pyperf show py39.json 
telco: Mean +- std dev: 8.16 ms +- 0.39 ms
unpack_sequence: Mean +- std dev: 80.6 ns +- 3.2 ns


# Benchmark Python 3.6
$ python3.6 bm_telco.py --append py36.json
.....................
telco: Mean +- std dev: 10.2 ms +- 0.3 ms

$ python3.6 bm_unpack_sequence.py --append py36.json
.....................
unpack_sequence: Mean +- std dev: 72.2 ns +- 1.8 ns

$ python3 -m pyperf compare_to py36.json py39.json  
telco: Mean +- std dev: [py36] 10.2 ms +- 0.3 ms -> [py39] 8.16 ms +- 0.39 ms: 1.26x faster
unpack_sequence: Mean +- std dev: [py36] 72.2 ns +- 1.8 ns -> [py39] 80.6 ns +- 3.2 ns: 1.12x slower

Geometric mean: 1.06x faster

$ python3 -m pyperf compare_to py36.json py39.json  --table
+-----------------+---------+-----------------------+
| Benchmark       | py36    | py39                  |
+=================+=========+=======================+
| telco           | 10.2 ms | 8.16 ms: 1.26x faster |
+-----------------+---------+-----------------------+
| unpack_sequence | 72.2 ns | 80.6 ns: 1.12x slower |
+-----------------+---------+-----------------------+
| Geometric mean  | (ref)   | 1.06x faster          |
+-----------------+---------+-----------------------+
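
The "Geometric mean" line is the geometric mean of the per-benchmark speedups (reference time divided by new time). As a rough illustration of the arithmetic behind the 1.06x figure above:

# illustrative only: recompute the geometric mean from the numbers shown above
import math

speedups = [10.2 / 8.16,   # telco (py36 -> py39)
            72.2 / 80.6]   # unpack_sequence (py36 -> py39)

geo_mean = math.prod(speedups) ** (1 / len(speedups))
print(f"Geometric mean: {geo_mean:.2f}x faster")  # ~1.06x faster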

But pyperformance has some features that make the whole task simpler and more convenient. For example, it can build a Python for you and run the benchmarks on that built Python. It also creates a virtual environment for you. That said, creating a virtual environment has become simpler these days; it's just two commands: "python3 -m venv env" and "env/bin/python -m pip install -r requirements.txt".

@ericsnowcurrently
Member Author

Good point about running custom benchmarks directly rather than adding them to pyperformance, especially for my needs. The only catch is if I want to use the "compile" command (which I do).

@vstinner
Member

I'm not against the ability to load benchmarks from other places. I just mentioned that, currently, there are ways to run benchmarks manually without losing too many pyperformance features.

@ericsnowcurrently
Member Author

This was done in #109.
