Make it possible to stop running #[bench] functions multiple times #20142
`#[bench]` functions are run multiple times. Some benchmark functions might depend on some runtime initialisation which may fail; such a benchmark (see the sketch below) will print “Benchmarks on even hours are forbidden!” a lot of times. If one opts not to report the set-up failure, the benchmark function just keeps the processor hot for a while without actually benchmarking anything (and reports 0 ns/iter anyway).

Would it be sensible to (one of the possible solutions):

- not run the `#[bench]` function multiple times, since iteration is done by `.iter`;
- not run the `#[bench]` again if `Bencher::iter` is not called at least once;
- provide a way through `Bencher` to cancel all subsequent runs of the `#[bench]` function?

Or maybe my approach is incorrect?
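A minimal sketch of the failing-set-up pattern described above; `initialise` and the benchmark body are illustrative assumptions, not code from the original report:

```rust
#![feature(test)] // the benchmark harness is nightly-only
extern crate test;

use test::Bencher;

// Hypothetical fallible set-up, standing in for whatever runtime
// initialisation the real benchmark depends on.
fn initialise() -> Result<Vec<u64>, String> {
    Err("Benchmarks on even hours are forbidden!".to_string())
}

#[bench]
fn bench_with_fallible_setup(b: &mut Bencher) {
    let data = match initialise() {
        Ok(data) => data,
        Err(msg) => {
            // The harness reruns this whole function, so the message
            // is printed once per rerun, not once overall.
            println!("{}", msg);
            return; // iter is never called; 0 ns/iter is still reported
        }
    };
    b.iter(|| data.iter().sum::<u64>());
}
```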
Comments

If one tries to keep benchmarks in separate files and opts for …
For crypto-bench, we want to run … And that time, in particular the wall time, negatively affects my other repos, because CI services like Travis CI and AppVeyor generally only allow one project to build at a time on their free open-source tiers. Further, even outside of CI infrastructure, we want contributors to run the benchmarks before sending PRs. But it's not reasonable to expect them to wait through many minutes of perf tests before opening a PR. Here are two ideas: …

Never mind. I just learned that …
This was bugging me, so I implemented the first proposed option.

As this is my first real Rust code, I imagine it's pretty bad quality. A first attempt is here: Craig-Macomber@adcb0fc. I'll clean it up a bit, give it some more testing, and then send in a pull request unless anyone offers any opinions before I get that far.
Do not run outer setup part of benchmarks multiple times to fix issue 20142

Fix #20142

This is my first real Rust code, so I expect the quality is quite bad. Please let me know in which ways it is horrible and I'll fix it.

Previously the whole benchmark function was rerun many times, but with this change, only the callback passed to `iter` is rerun. This improves performance by saving benchmark startup time. The setup used to be called a minimum of 101 times, and now it runs only once.

I wasn't sure exactly what should be done for the case where `iter` is never called, so I left a FIXME for that: currently it does not error, and I added tests to cover that case.

I have left the algorithm and statistics unchanged: I don't like that the minimum number of runs is 301 (that's bad for very slow benchmarks), but I consider such changes out of scope for this fix.
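A small sketch, not taken from the PR itself, of the kind of benchmark this change helps: the set-up outside `iter` now runs once, while only the closure passed to `iter` is rerun by the timing loop.

```rust
#![feature(test)]
extern crate test;

use test::{black_box, Bencher};

#[bench]
fn bench_sum_with_expensive_setup(b: &mut Bencher) {
    // Expensive set-up outside of iter: with this fix it runs once,
    // instead of being repeated on every rerun of the whole function.
    let data: Vec<u64> = (0..100_000).collect();

    // Only this closure is executed repeatedly for timing.
    b.iter(|| black_box(data.iter().sum::<u64>()));
}
```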
Does anyone have an example of how to use this feature? I'd like to run a benchmark only once.