Add a way to plug in custom benchmarks. #89
Comments
pyperformance is just a set of benchmarks written with pyperf; it runs them sequentially, using the --append option to add all results to a single JSON file. Most of the work of managing benchmark results is done by pyperf; see its --append option. You can run your custom benchmark manually with --append. I commonly run benchmarks manually:
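For illustration, a minimal sketch of that manual workflow (the benchmark script and file names here are hypothetical):

```sh
# The first run creates the JSON file; later runs append their results to it.
python3 bench_custom.py -o results.json
python3 bench_other.py --append results.json

# pyperf can then display the accumulated results.
python3 -m pyperf show results.json
```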
But pyperformance has some features that make the whole task simpler and more convenient. For example, it can build Python for you and run the benchmarks on the built interpreter, and it creates a virtual environment for you. That said, creating a virtual environment has become simple these days; it's just two commands: "python3 -m venv env" and "env/bin/python -m pip install -r requirements.txt".
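Roughly, the manual route versus letting pyperformance drive (the requirements file is a placeholder, and the exact flags may differ between pyperformance versions):

```sh
# Manual: create the venv yourself and run whatever you like in it.
python3 -m venv env
env/bin/python -m pip install -r requirements.txt

# pyperformance: it sets up its own venv before running the suite.
pyperformance run -o results.json
```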
Good point about running custom benchmarks directly rather than adding them to pyperformance, especially for my needs. The only catch is if I want to use the "compile" command (which I do).
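For context, the compile command drives a build-and-benchmark cycle from an INI-style configuration file; a hedged sketch (the config file name and the revision are placeholders):

```sh
# Build the given CPython revision as described by benchmark.conf,
# then run the benchmark suite against the freshly built interpreter.
pyperformance compile benchmark.conf main
```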
I'm not against the ability to load benchmarks from other places. I just mentioned that currently there are ways to run benchmarks manually without losing too many pyperformance features.
This was done in #109.
Currently you have to modify this repo if you want to run a custom benchmark. It would be nice to have a mechanism by which a custom benchmark could be plugged in externally.
(This isn't a priority.)