Prepare commands that differ, and only run once, depending on the command being timed? #216
Comments
Thank you for the feedback. That sounds like a useful feature to have. Did you see the `--prepare` option? We could change it to accept one preparation command per benchmarked command and allow:

```bash
hyperfine --prepare "git checkout branch1" "./build.sh" \
          --prepare "git checkout branch2" "./build.sh"
```

What do you think?
You could also include the checkout in the benchmarked command itself and use a warmup run to discard the first measurements:

```bash
hyperfine --warmup 1 "git checkout branch1; build.sh" "git checkout branch2; build.sh"
```

The only problem is that it will perform one unnecessary build for each branch.
Another thing that comes really close is the `--parameter-scan` option. Imagine for a moment that your branches were actually called `branch1` and `branch2`:

```bash
hyperfine \
    --parameter-scan number 1 2 \
    --prepare "git checkout branch{number}" \
    "build.sh"
```

So another option would be to add a new option that allows non-numeric parameter runs. Maybe:

```bash
hyperfine \
    --parameter-list branch_name master,feature1,bugfix,my-test-feature \
    --prepare "git checkout {branch_name}" \
    "build.sh"
```
I will add another way we could address a lot of the issues regarding parameterized commands: splitting benchmarking and reporting into two separate commands. For example, you could invoke hyperfine once per parameter value, exporting the raw results to a file, and then a report sub-command could print the pretty comparison stats and everything from that file. This way we can loop over parameters in bash easily.
These all sound like great solutions. Today our script generates two JSON files, along the lines of what @piyushrungta25 showed in that example, and I'm manually doing statistical calculations to get the difference in the means/standard deviations. So some way to report on either a session file, or on two or more independent exports, would be nice (and that seems valuable for comparing results over longer time periods, independent of my initial request, which is something we're also currently planning to do).

However, it'd be amazing to have one of the other options as well. We have some git hooks right now that run on every checkout, and those take some time to run, so keeping the checkout out of the timed and warmup runs matters for us.

Thank you for the responses and reception to my request! We have our workarounds for now, but if we ever do this analysis again in the future, one or more of these options would definitely be welcome!
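The manual comparison described above can be done with a short script. This is a hedged sketch that assumes hyperfine's JSON export layout (`{"results": [{"times": [...]}]}`); the file names in the usage comment are hypothetical:

```python
# Hedged sketch: manually comparing two hyperfine --export-json files.
# Assumes the export layout {"results": [{"times": [...]}]}.
import json
import statistics


def load_times(path: str) -> list[float]:
    """Read the raw per-run times (seconds) from a hyperfine JSON export."""
    with open(path) as f:
        return json.load(f)["results"][0]["times"]


def compare(times_a: list[float], times_b: list[float]) -> dict:
    """Difference of the means, plus each sample's standard deviation."""
    return {
        "mean_diff": statistics.mean(times_b) - statistics.mean(times_a),
        "stdev_a": statistics.stdev(times_a),
        "stdev_b": statistics.stdev(times_b),
    }


# Usage (file names hypothetical):
# stats = compare(load_times("branch1.json"), load_times("branch2.json"))
```

A built-in report command would make this script unnecessary, which is exactly the appeal of the session-file idea.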
I would like to take this issue if no one is currently working on it. I think this issue contains two feature requests:

1. A preparation command that can differ per benchmarked command.
2. Non-numeric parameter runs.

I would like to implement feature 1 by supporting multiple `--prepare` options, and feature 2 by adding a new option, maybe `--parameter-list`.
I'd rather not follow that path. It would definitely be very powerful, but I would really like to keep hyperfine's command-line interface simple.
This is now supported via #218 by @iamsauravsharma. However, the
Released in v1.8.0.
Wonderful, thank you so much for implementing this!
--
Kevin Lundberg
I have a specific request: We're using hyperfine to measure the difference in compile times between two branches of our codebase. Right now I have two separate hyperfine invocations in a script, and before each of them I check out the specific branch in git that I want to measure compile times for.
I noticed today that hyperfine supports timing and comparing multiple commands in a single invocation and that it provides stats comparing them, which is great! But in order for us to use that, we'd need to do a git checkout before each run of the build command (something like `hyperfine "git checkout branch1; build.sh" "git checkout branch2; build.sh"`), and I don't want that checkout time to be included in our metrics. Would it be possible to have some sort of global prepare statement per command being run that runs only once, and is different for each command being tested? I imagine that would be a new feature, and a nontrivial one at that.
Or would you recommend running a few warmup runs with the checkout in the command to benchmark, or something like that?
Thanks!