Benchmarking Suite #1712
Establishing and documenting a standard benchmark suite for Parcels would bring an active focus to performance within the Parcels project.
This benchmarking suite could include whole-simulation tests, as well as tests that target specific parts of the codebase (e.g., particle file writing, which would be important in #1661). Note that tests relating to MPI should be realistic whole-simulation tests, as the loading of fieldsets and the locations of particles have a significant impact on performance.
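To make the whole-simulation idea concrete, here is a minimal sketch of how such a benchmark could time the two phases separately (fieldset loading vs. particle advection), since both contribute to the cost the issue describes. The function names and the toy kernel are illustrative stand-ins, not Parcels API:

```python
import time

def load_fieldset(n):
    """Toy stand-in for loading hydrodynamic forcing data."""
    return [[float(i + j) for j in range(n)] for i in range(n)]

def advect(particles, field, steps):
    """Toy advection loop standing in for a Parcels kernel execution."""
    for _ in range(steps):
        particles = [(x + field[0][0] * 1e-6, y) for x, y in particles]
    return particles

def benchmark_whole_simulation(n_particles=1000, steps=50):
    """Time fieldset loading and advection separately, since both phases
    matter for realistic (including MPI) benchmarks."""
    t0 = time.perf_counter()
    field = load_fieldset(50)
    t_load = time.perf_counter() - t0

    particles = [(0.0, 0.0)] * n_particles
    t0 = time.perf_counter()
    advect(particles, field, steps)
    t_advect = time.perf_counter() - t0
    return {"load_s": t_load, "advect_s": t_advect}
```

Reporting the phases separately makes regressions easier to attribute (e.g., a slowdown in fieldset loading vs. in kernel execution).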
Ideally:
Have a suite of tests that can run on CI (when requested via a change in PR label), testing various core parts of the codebase, and saving and uploading a waterfall report of execution time and memory use for each function (such as those generated by sciagraph), as well as I/O.
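A rough per-function report of the kind described above can be approximated with only the standard library, using `time.perf_counter` and `tracemalloc`. This is a sketch, not sciagraph's actual output format; `write_particles` is a hypothetical stand-in for a Parcels function under test:

```python
import time
import tracemalloc
from functools import wraps

REPORT = []  # accumulated (function name, seconds, peak bytes) rows

def profiled(func):
    """Record wall time and peak memory per call -- roughly the data a
    waterfall report would show for each function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        t0 = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - t0
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            REPORT.append((func.__name__, elapsed, peak))
    return wrapper

@profiled
def write_particles(n):
    """Toy stand-in for particle-file writing."""
    return [str(i) for i in range(n)]
```

On CI, the accumulated `REPORT` rows could be serialized and uploaded as an artifact; note this simple decorator does not support nested profiled calls, since `tracemalloc.start()` is not reentrant here.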
Known tools:
- asv on Conda (used by numpy and scipy, and used by the xarray team)
- pytest-benchmark on Conda

The benchmarks should be run on a machine with consistent resources (large simulations can be run on Lorenz at IMAU, which has significant resources and access to hydrodynamic forcing for realistic simulations).
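For reference, an asv benchmark module is just a Python file of classes whose `time_*` and `peakmem_*` methods each become one benchmark. The suite below is a self-contained toy sketch (the class name, sizes, and kernel are illustrative, not Parcels code):

```python
class ParticleAdvectionSuite:
    """Sketch of an asv benchmark class: asv calls setup() before each
    benchmark, times time_* methods, and measures peak memory of
    peakmem_* methods."""

    def setup(self):
        # Toy particle positions standing in for a ParticleSet.
        self.positions = [(i * 0.1, i * 0.2) for i in range(10_000)]

    def time_advect_step(self):
        # Toy stand-in for one advection kernel step.
        self.positions = [(x + 0.01, y + 0.01) for x, y in self.positions]

    def peakmem_positions_copy(self):
        # Peak-memory benchmark: asv reports the high-water mark here.
        return [p for p in self.positions]
```

A key advantage of asv over one-off timing scripts is that it tracks results across commits and renders regression plots, which fits the goal of bringing an active focus to performance.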
Related: