Expose a basic API for accessing benchmark results #26
Conversation
… and pretty printing are defined. The `@benchmark` macro now returns a `BenchmarkResults` object.
```diff
@@ -1,10 +1,24 @@
 module Benchmarks
-export @benchmark
+export @benchmark,
+       BenchmarkResults,
```
I would prefer not to export so many names. I'm trying to do my part to remove the culture of `using` pulling in a ton of names.
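To illustrate the export-hygiene point, here is a minimal sketch of keeping a module's export list small and letting callers qualify everything else. The names (`pretty_time`, the placeholder macro body) are illustrative assumptions, not the package's actual API:

```julia
module Benchmarks
    export @benchmark            # the one name worth exporting

    macro benchmark(ex)
        # placeholder body; the real macro would time `ex`
        :(nothing)
    end

    # deliberately unexported; reached as Benchmarks.pretty_time
    pretty_time(ns::Real) = string(round(ns / 1e6, digits=3), " ms")
end

using .Benchmarks                     # brings only @benchmark into scope
Benchmarks.pretty_time(2_500_000.0)   # qualified access to everything else
```

Callers who want a short name can still opt in explicitly with `import .Benchmarks: pretty_time`, which keeps the decision at the use site rather than in the package.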
Generally think this is a good idea. Suggested a few design changes I'd like to see and then I'd merge this. Thoughts, @mbauman?
To clarify one big design principle, I want the …
The basic idea here is really nice; I was having some difficulty coming up with a good way to take the output of …
It would. I also wanted to add a …
This makes a lot of sense! Now that I have this direction in mind, I'd like to change the approach the PR takes. Instead of providing a …

I'll push a different branch to my fork that tries out the above, and link it here for comparison. Then we can decide which direction this PR should ultimately take.
A branch for the above approach can now be found here. In addition to the changes listed above, this branch: …
Let me know what you guys think. If the …
I don't feel strongly here. This part of the project is neither my wheelhouse nor interest, so I'll happily let you guys take charge. I'll just say that I agree that it makes sense to return the raw results object, and generate some summary statistics upon display. I think that will be a fairly powerful paradigm that could eventually allow robust comparisons between two benchmark runs.
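The "return the raw results, summarize on display" paradigm described above can be sketched as a type whose `Base.show` method derives summary statistics lazily. The type and field names here are hypothetical, chosen only to illustrate the idea:

```julia
using Statistics  # for median

struct RawResults
    times_ns::Vector{Float64}   # one raw timing per evaluation
end

# Summary statistics are computed only when the object is printed,
# so the raw samples stay available for later comparisons.
function Base.show(io::IO, r::RawResults)
    print(io, "RawResults: ", length(r.times_ns), " samples, ",
          "median ", median(r.times_ns), " ns")
end

r = RawResults([10.0, 12.0, 11.0])
```

Because the raw vector is retained, comparing two benchmark runs later reduces to comparing two `RawResults` values rather than two pre-baked summaries.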
This PR seemed stale, so I closed it. FYI, I've started a new branch on my fork of this package which I'm going to be actively developing for the next week or so, with the goal of making it robust for usage in BenchmarkTrackers.jl (to make progress on JuliaLang/julia#13893). After enough battle-testing, I'd be willing to merge it back in here, or better yet, move development to a package under a Julia organization (JuliaCI maybe?). Just food for thought.
I've been working on this on my own recently. I'll check in on your fork when I've finished doing the work I wanted to do on my end.
PR summary
- Changes the name of `Results` to `RawResults`.
- Adds a new type, `BenchmarkResults`. A bunch of exported accessor functions are defined on this type, constituting a basic API for retrieving benchmark information.
- `@benchmark f(x)` now returns a `BenchmarkResults` object.
- Pretty printing is now defined on `BenchmarkResults` instead of `SummaryStatistics`.
- Very dumb tests are defined that basically just ensure that the API functions exist.
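The accessor-API shape described in the summary might look roughly like the following. The field and function names (`nsamples`, `totaltime`, `meantime`) are assumptions for the sketch, not the PR's confirmed interface:

```julia
# Hypothetical BenchmarkResults with a small accessor API on top,
# in the spirit of what @benchmark would return.
struct BenchmarkResults
    samples::Int
    totaltime_ns::Float64
end

nsamples(r::BenchmarkResults)  = r.samples
totaltime(r::BenchmarkResults) = r.totaltime_ns
meantime(r::BenchmarkResults)  = r.totaltime_ns / r.samples

r = BenchmarkResults(100, 250_000.0)
meantime(r)  # mean time per evaluation, in ns
```

Defining the API as functions rather than documented field access leaves room to change the internal representation (e.g. storing raw samples) without breaking callers.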
Things that I wasn't 100% sure about

- Whether the numbered file names imply `include` order requirements. Following this assumption, I named the new file `05_api.jl`.