This defines a simple BenchmarkTools.jl suite, based on BENCHMARKS.md, for the actual benchmark code. Feel free to add other things we should keep track of, or perhaps some integration benchmarks.

This also sets up AirspeedVelocity.jl for automatic CI output, which parses the benchmark results. Basically, every pull request will get a comment like SymbolicML/DynamicExpressions.jl#94 (comment) with all of the performance info. It also measures the change in load time, which is quite useful.
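As a rough illustration, a BenchmarkTools.jl suite of this shape might look like the following sketch. The group names mirror the benchmark labels in the output below, but the benchmark bodies are hypothetical placeholders, not the actual contents of this PR:

```julia
# Hypothetical sketch of a BenchmarkTools.jl suite. The nested group
# names ("julia"/"pydict"/"init", etc.) mirror the labels in the output
# below; the bodies are illustrative stand-ins, not the PR's real code.
using BenchmarkTools

const SUITE = BenchmarkGroup()

SUITE["julia"] = BenchmarkGroup()
SUITE["julia"]["pydict"] = BenchmarkGroup()

# Time constructing an empty dictionary.
SUITE["julia"]["pydict"]["init"] = @benchmarkable Dict{String,Int}()

# Time filling and then emptying a dictionary; `setup` runs before
# each sample so the benchmark body sees a fresh Dict.
SUITE["julia"]["pydict"]["pydel"] = @benchmarkable begin
    for i in 1:100
        d[string(i)] = i
    end
    empty!(d)
end setup = (d = Dict{String,Int}())
```

By convention, a `SUITE` object like this lives in `benchmark/benchmarks.jl`, which is where AirspeedVelocity.jl looks for it when comparing revisions.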
I find it nice for catching performance regressions in PRs. It is used in these packages: https://github.com/search?q=/Pkg.build%5C(%22AirspeedVelocity%22%5C)/+language:YAML&type=code&l=YAML
You can run this benchmark suite with:

```bash
julia -e 'using Pkg; pkg"add AirspeedVelocity"; pkg"build AirspeedVelocity"'
benchpkg
```
By default it will just compare `main` to `dirty`, but you can also test over version history. Running it gives me the output:
```
@py/pydict/init
@py/pydict/pydel
julia/pydict/init
julia/pydict/pydel
time_to_load
```
View all the options with `benchpkg -h`.