Proposal for fixed generation of benchmark data. #124
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##             main     #124   +/-   ##
=======================================
  Coverage   98.85%   98.85%
=======================================
  Files          14       14
  Lines         699      699
=======================================
  Hits          691      691
  Misses          8        8
```

Continue to review the full report at Codecov.
I've so far made minimal changes to the benchmarks themselves, but given that this PR forces saving of sample data (rather than just creating a Python object), I'd be interested in @stephenworsley's thoughts on better optimisation via ASV's […]
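(The reference above is truncated in the page capture. One ASV mechanism it could plausibly mean is `setup_cache`, though that is a guess, not something the thread confirms. A minimal sketch of how `setup_cache` avoids repeating expensive data generation for every timing sample:)

```python
import numpy as np


class TimeLoadBenchmark:
    """Sketch of an ASV benchmark using setup_cache (illustrative only)."""

    def setup_cache(self):
        # ASV runs setup_cache once per environment, pickles the return
        # value, and passes it as the first argument to setup() and each
        # benchmark method, so this generation step is not repeated for
        # every timing repeat.
        rng = np.random.default_rng(0)
        return rng.random((1000, 1000))

    def time_mean(self, data):
        # `data` is the cached value produced by setup_cache.
        data.mean()
```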
Take this as a suggestion, not a review. You could refactor into either a closure or a class that wraps the Python executable, rather than hard-coding it as a global constant. e.g.
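(The original comment's code blocks were lost in the page capture; here is a hedged sketch of the closure option, with an illustrative executable path and names rather than the PR's actual code:)

```python
from subprocess import run


def make_script_runner(python_exe):
    """Return a function bound to a specific Python executable."""

    def run_script(script_path, *args):
        # Run the given script under the wrapped interpreter.
        return run([python_exe, str(script_path), *args], check=True)

    return run_script


# Bind once, instead of hard-coding a global constant everywhere:
run_in_data_gen_env = make_script_runner("/path/to/other/env/bin/python")
```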
or
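(And the class version, equally a sketch with illustrative names:)

```python
from subprocess import run


class PythonRunner:
    """Wrap a Python executable so callers never hard-code its path."""

    def __init__(self, python_exe):
        self.python_exe = python_exe

    def run_script(self, script_path, *args):
        return run([self.python_exe, str(script_path), *args], check=True)


data_gen_runner = PythonRunner("/path/to/other/env/bin/python")
```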
You get the picture.
If this needed to be more generic, sure. But that's engineering for something that I don't expect to happen: #124 (comment)
Generally, between us (@pp-mo, @stephenworsley, @jamesp), we think it is OK!
* add benchmark
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* handle cases where files don't exist
* add benchmark
* Proposal for fixed generation of benchmark data. (#124)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* post merge fixes
* refactor _gridlike_mesh
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* fix benchmarks
* turn on debugging
* turn on debugging
* fix data generation
* fix benchmarks
* try saving with unique address
* Synthetic file generation - re-use files and ensure uniqueness.
* try saving with unique address
* fix benchmarks
* fix nox
* refactor long benchmarks
* refactor long benchmarks
* move DATA_GEN_PYTHON setup to nox
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* avoid python name "type"
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* changes to test infrastructure
* lint fix
* complete "type" refactor
* fix benchmarks
* toggle ci benchmark debugging off
* make codecov more lenient
* parameterise load/save benchmarks
* address review comments
* lint fix
* review actions

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Martin Yeo <40734014+trexfeathers@users.noreply.github.com>
Co-authored-by: Martin Yeo <martin.yeo@metoffice.gov.uk>
Have a read of benchmarks/benchmarks/generate_data.py; the docstrings provide detail on my thoughts for this.

Summary: run the generation scripts in an alternative environment that will therefore remain unchanged throughout the benchmark run.
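(A minimal sketch of that idea, assuming a subprocess-based approach. `DATA_GEN_PYTHON` appears in the commit list above, but the function name and the details below are my illustration, not necessarily the module's actual API:)

```python
from subprocess import run
from textwrap import dedent

# Interpreter of a separate, pinned environment used only for data
# generation; the benchmarked environment never runs this code itself.
DATA_GEN_PYTHON = "/path/to/data-gen-env/bin/python"


def run_function_elsewhere(module, func_name, *args):
    """Call module.func_name(*args) under DATA_GEN_PYTHON via subprocess.

    The called function is expected to save its output to disk (e.g. a
    NetCDF file); the benchmark environment then only loads that file,
    so its own packages stay untouched throughout the benchmark run.
    """
    arg_list = ", ".join(repr(a) for a in args)
    script = dedent(
        f"""
        from {module} import {func_name}
        {func_name}({arg_list})
        """
    )
    run([DATA_GEN_PYTHON, "-c", script], check=True)
```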