[FR] Lightweight way to pause/resume timer in benchmarks #1087
Comments
So how does it work?
It works just like the normal pause/resume functions, but the timings recorded are subtracted from the normal timings when the report is generated. You can see the diff here: master...kgorking:lightweight_timer_suspension
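For illustration, a rough sketch of the mechanism as described (all names below are my own inventions; the real implementation is in the linked diff):

```cpp
// Hypothetical sketch: time spent in an "ignored" region is accumulated
// separately and only subtracted when the report is built, so it never
// feeds back into the iteration-count estimation.
struct TimerSketch {
  double elapsed = 0.0;       // what the iteration estimator sees
  double ignored = 0.0;       // total time spent in ignored regions
  double ignore_start = 0.0;

  void BeginIgnore(double now) { ignore_start = now; }
  void EndIgnore(double now) { ignored += now - ignore_start; }

  // Used only when generating the report.
  double ReportedTime() const { return elapsed - ignored; }
};
```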
I see, so no magic. The effect is really surprising to me.
The pause/resume functions modify the values of the timer used to calculate the number of iterations that should be run. In my example from above, the benchmarking code thinks that an iteration takes 0.810us when in reality it takes 2372us, and then calculates that 793297 iterations should be run to hit a certain time threshold. The benchmark runner thinks the run will take about 0.64s (793297 × 0.810us), when it will actually take roughly half an hour (793297 × 2372us ≈ 1882s).
Hm, so with BeginIgnoreTiming/EndIgnoreTiming, if the time spent with paused timing increases, the iteration count will decrease?
The iteration count will be the same as if there were no pause/resume calls.
I see. Let me explain my take on this. Let's suppose we have a benchmark:
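(The code block from this comment did not survive extraction; a plausible reconstruction, with Payload() as a placeholder name for the code under measurement:)

```cpp
static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    Payload();  // the code we want to measure
  }
}
BENCHMARK(BM_Example);
```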
We want to measure how much time is spent in Payload(). Now, let's suppose we have a benchmark:
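(Again reconstructed; SetUp() is a placeholder for the per-iteration work excluded via pause/resume:)

```cpp
static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    state.PauseTiming();
    SetUp();  // per-iteration work we do not want to measure
    state.ResumeTiming();
    Payload();  // the code we want to measure
  }
}
BENCHMARK(BM_Example);
```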
We still want to measure only how much time is spent in Payload(). I think it is really important to notice that the fewer iterations we do, the less statistically reliable the measurements become. So I, personally, find this approach to be fundamentally wrong.
I don't think it's fundamentally wrong, and it is in fact how we used to do the pause/resume when the library was first written. However, your (@LebedevRI) point stands, which is that we want to take into account the pause when calculating iterations to ensure that the bits we are measuring get enough coverage. Now ideally we wouldn't use the iteration growth stuff and instead would run while monitoring stddev, and run until the results stabilise. So I'm happy that you've found a way to make your benchmarks run for less time, but I don't think your results are as statistically useful as they could be. #1051 is a related bit of work by which we can reduce the variance of benchmarks and balance execution cost.
I'm guessing what we'd want to monitor is relative standard deviation (stddev relative to the mean)?
Practically, I think it would have to be, yes. It's one of the many reasons why I haven't pursued it.
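For illustration, here is a minimal sketch of the kind of relative-stddev stopping rule being discussed. This is purely hypothetical and not part of the library; all names and thresholds are assumptions:

```cpp
#include <cmath>

// Welford's online mean/variance over per-repetition timings.
struct RunningStats {
  long n = 0;
  double mean = 0.0, m2 = 0.0;

  void Add(double x) {
    ++n;
    const double delta = x - mean;
    mean += delta / n;
    m2 += delta * (x - mean);
  }

  // Relative standard deviation: stddev / mean.
  double RelStddev() const {
    return (n > 1 && mean > 0.0) ? std::sqrt(m2 / (n - 1)) / mean : 1.0;
  }
};

// Keep running until the measurement stabilises, with a hard cap so a
// noisy benchmark cannot loop forever.
bool ShouldKeepRunning(const RunningStats& s, long max_reps) {
  return s.n < max_reps && (s.n < 5 || s.RelStddev() > 0.01);
}
```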
Since we now clearly have a second potential use case, should I try to resurrect that patch?
Using PauseTiming/ResumeTiming can make some benchmarks take a very long time to complete. Given the following example, if the call to A() takes longer than B(), then the runtime of the benchmarks can increase to ludicrous lengths.
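(The example code block was lost in extraction; a plausible reconstruction based on the description:)

```cpp
static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    state.PauseTiming();
    A();  // expensive setup: excluded from the report, but it still
          // runs on every iteration and dominates the wall-clock time
    state.ResumeTiming();
    B();  // the code under measurement
  }
}
BENCHMARK(BM_Example);
```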
To fix this, I have implemented a pair of functions (BeginIgnoreTiming/EndIgnoreTiming) that can be used to ignore the timing cost of code executed between them. The timing info is kept around until CreateRunReport() is called, where the ignored duration is subtracted from the CPU- and real-time timings, so it only affects the printed timing values.
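Usage would presumably look like the following; I'm assuming the new functions are members of benchmark::State, mirroring PauseTiming/ResumeTiming:

```cpp
static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    state.BeginIgnoreTiming();
    A();  // still counted when estimating the iteration count,
          // but subtracted from the reported timings
    state.EndIgnoreTiming();
    B();
  }
}
BENCHMARK(BM_Example);
```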
This allows benchmarks to run as if there were no calls to pause the timings, while getting the timings of the benchmark I am actually interested in (B() in the above example).
I have included the results of some runs from a personal project of mine. The benchmark is of a similar format to the A/B example from above.