
Expose a basic API for accessing benchmark results #26

Closed · wants to merge 3 commits

Conversation

@jrevels commented Oct 1, 2015

PR summary

  • Renames Results to RawResults.

  • Adds a new type, BenchmarkResults:

    immutable BenchmarkResults
        raw::RawResults
        stats::SummaryStatistics
        # Inner constructor eagerly computes summary statistics from the raw results.
        BenchmarkResults(raw::RawResults) = new(raw, SummaryStatistics(raw))
    end

    A bunch of exported accessor functions are defined on this type, constituting a basic API for retrieving benchmark information; a sketch of what these accessors might look like appears after this list.

  • @benchmark f(x) now returns a BenchmarkResults object.

  • Pretty printing is now defined on BenchmarkResults instead of SummaryStatistics.

  • Very dumb tests are defined that basically just ensure that the API functions exist.
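
A minimal sketch of what such accessor functions might look like (the names below are illustrative, not necessarily the PR's actual exports):

    # Hypothetical accessors on BenchmarkResults; names are for illustration only.
    rawresults(results::BenchmarkResults) = results.raw
    summarystats(results::BenchmarkResults) = results.stats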

Things that I wasn't 100% sure about

  • File names. I'm assuming the numbers in front of the file names simply serve as a reminder of include-order requirements. Following this assumption, I named the new file 05_api.jl.
  • Tests. The API tests I added are really primitive; any suggestions for improvement are welcome.
  • Comment verbosity, exported functionality, type names, function names, life, the universe, and everything

From the commit message: "… and pretty printing are defined. The benchmark macro now returns a BenchmarkResults object."

    @@ -1,10 +1,24 @@
     module Benchmarks
    -export @benchmark
    +export @benchmark,
    +       BenchmarkResults,
@johnmyleswhite (Owner) commented on this hunk:

I would prefer not to export so many names. I'm trying to do my part to remove the culture of `using` pulling in a ton of names.

@johnmyleswhite (Owner)

I generally think this is a good idea. I've suggested a few design changes I'd like to see, and then I'd merge this. Thoughts, @mbauman?

@johnmyleswhite (Owner)

To clarify one big design principle: I want @benchmark to only handle data collection, because I want it to be possible to costlessly use different analysis strategies. I'm OK with show forcing an analysis to take place, but generally I want data collection and data analysis to be orthogonal steps that you request independently.
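
Under that principle, usage might look roughly like the following sketch (hedged: it assumes @benchmark returns the raw results and reuses the SummaryStatistics(raw) constructor shown above):

    raw = @benchmark f(x)           # step 1: data collection only
    stats = SummaryStatistics(raw)  # step 2: analysis, requested explicitly
    show(STDOUT, stats)             # display may force an analysis; collection never does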

@staticfloat (Collaborator)

The basic idea here is really nice; I was having some difficulty coming up with a good way to take the output of writecsv() and recreate the statistics given via a SummaryStatistics output. It's hard for me to tell: would this change allow that?

@johnmyleswhite (Owner)

It would. I also wanted to add a writecsv method that just dumps SummaryStatistics.
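
A rough sketch of the round trip this would enable (hedged: the writecsv method on raw results and a RawResults constructor accepting the tabular data are assumptions, not the package's confirmed API):

    raw = @benchmark f(x)
    writecsv("raw.csv", raw)                   # dump the raw samples to disk
    reloaded = RawResults(readcsv("raw.csv"))  # hypothetical reconstruction from the table
    stats = SummaryStatistics(reloaded)        # recompute the statistics after the fact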

@jrevels (Author) commented Oct 2, 2015

> To clarify one big design principle: I want @benchmark to only handle data collection, because I want it to be possible to costlessly use different analysis strategies. I'm OK with show forcing an analysis to take place, but generally I want data collection and data analysis to be orthogonal steps that you request independently.

This makes a lot of sense! Now that I have this direction in mind, I'd like to change the approach the PR takes.

Instead of providing a BenchmarkResults object, we could provide an abstract type, AbstractAnalysis, on which an interface for retrieving stats is defined. Then we'd define SummaryStatistics <: AbstractAnalysis, include RawResults as a field of SummaryStatistics, and provide the relevant API methods. I think this strategy might be more flexible and extensible with regard to adding new "analytics types" in the future.

I'll push a different branch to my fork that tries out the above, and link it here for comparison. Then we can decide which direction this PR should ultimately take.
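
A minimal sketch of that proposed hierarchy (hedged: the statistics fields are elided and the interface method is illustrative):

    abstract AbstractAnalysis  # interface for retrieving benchmark statistics

    immutable SummaryStatistics <: AbstractAnalysis
        raw::RawResults  # each analysis keeps a handle on the data it was computed from
        # ... computed statistics fields ...
    end

    # Hypothetical interface method that every analysis type would implement:
    rawresults(analysis::AbstractAnalysis) = analysis.raw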

@jrevels (Author) commented Oct 2, 2015

> Instead of providing a BenchmarkResults object, we could provide an abstract type, AbstractAnalysis, on which an interface for retrieving stats is defined. Then we'd define SummaryStatistics <: AbstractAnalysis, include RawResults as a field of SummaryStatistics, and provide the relevant API methods. I think this strategy might be more flexible and extensible with regard to adding new "analytics types" in the future.
>
> I'll push a different branch to my fork that tries out the above, and link it here for comparison. Then we can decide which direction this PR should ultimately take.

The branch for the above approach can now be found here.

In addition to the changes listed above, this branch:

  • Incorporates all of @johnmyleswhite's suggestions
  • Uses the name ExecutionResults instead of RawResults - I think the former is more descriptive.
  • Moves pretty printing utility functions to their own file

Let me know what you guys think. If the api2 branch seems like the better option, I can close this PR and open a new one, or do some rebasing to mangle the api2 commits into this PR's branch. If, instead, this PR's original approach is preferred, I can re-implement @johnmyleswhite's suggestions on this branch.

@mbauman (Collaborator) commented Oct 3, 2015

I don't feel strongly here. This part of the project is neither my wheelhouse nor my main interest, so I'll happily let you guys take charge. I'll just say that I agree that it makes sense to return the raw results object and generate some summary statistics upon display. I think that will be a fairly powerful paradigm that could eventually allow robust comparisons between two benchmark runs.

@jrevels closed this on Dec 7, 2015
@jrevels deleted the jr/api branch on Dec 7, 2015
@jrevels (Author) commented Dec 7, 2015

This PR seemed stale, so I closed it.

FYI, I've started a new branch on my fork of this package which I'm going to be actively developing for the next week or so, with the goal of making it robust for usage in BenchmarkTrackers.jl (to make progress on JuliaLang/julia#13893). After enough battle-testing, I'd be willing to merge it back in here, or better yet, move development to a package under a Julia organization (JuliaCI maybe?). Just food for thought.

@johnmyleswhite (Owner)

I've been working on this on my own recently. I'll check in on your fork when I've finished doing the work I wanted to do on my end.
