Conversation
Currently asv refuses to run with multiple parameters
@VeckoTheGecko - you might give this a try. On this branch, do … Once in the pixi shell, try … At the moment, …
Right now this is set up to use all packages (including parcels) as defined by the pixi environment.
Looks like benchmarks need to be prefixed with specific names, e.g. |
Pixi environment now only provides the necessary packages for the benchmarking environment. asv will use rattler to install parcels and its dependencies in another environment
…hmarks The changes provided here allow the parcels_benchmarks/benchmark_setup.py module to be found within asv benchmarks. This makes it easy to handle downloading the necessary datasets for benchmarks. I've added a helper function in the moi curvilinear benchmarks to load the xarray dataset; storing the dataset as an object attribute can cause contamination between benchmarks, which I want to avoid. Each benchmark now loads a fresh dataset from disk at the beginning by calling _load_ds(...).
What about removing the CLI but adding a DATA_DIR env var for override? Without this, I have to clean out my …
@willirath - makes sense. Let me see what I can do to tighten up the connection here with respect to the data home.
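A minimal sketch of the env-var override discussed above, assuming a `DATA_DIR` variable and a hypothetical default under the user cache dir. Note the `Path(...)` wrapping of the env var: `os.environ` yields a plain string, which matches the later review note about a "missing Path on env var".

```python
import os
from pathlib import Path

# Hypothetical default; the real code may derive this differently.
_DEFAULT_DATA_DIR = Path.home() / ".cache" / "parcels_benchmarks"


def get_data_dir() -> Path:
    """Return the benchmark data directory, honouring a DATA_DIR override.

    The override comes back from os.environ as a str, so it must be
    wrapped in Path() before downstream code uses Path semantics on it.
    """
    override = os.environ.get("DATA_DIR")
    return Path(override) if override else _DEFAULT_DATA_DIR
```

With this, `DATA_DIR=/scratch/bench asv run` would redirect the dataset cache without any CLI plumbing.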
Co-authored-by: Willi Rath <willirath@users.noreply.github.com>
Until we determine how we want to manage centrally storing benchmark results, we'll keep these out of the repository for now
@VeckoTheGecko - I've removed the …
@willirath - I think I've addressed all of your comments. You can now use the …
willirath
left a comment
Two minor things: leftover argparse logic in main block and missing Path on env var.
Co-authored-by: Willi Rath <willirath@users.noreply.github.com>
Just tested both with and without overriding the cache dir, on our Uni Kiel HPC and on my MacBook. I think it's good to be merged.
erikvansebille
left a comment
Looks good! Nice push forward. And installation was a breeze 👍
A few comments below
pset.execute(parcels.kernels.AdvectionEE, runtime=runtime, dt=dt, verbose_progress=False)
def peakmem_pset_execute_3d(self, interpolator, chunk, npart):
The body of this function appears identical(?) to the body of time_pset_execute_3d(). Can we reduce code duplication by organising this more smartly? Why do we need two functions?
They are identical, but this is the way ASV works: if we want a benchmark that measures peak memory, the function name has to start with peakmem_; if we want runtime, it has to start with time_.
OK, but can't we then have these two functions call a shared _execute() function or so, so that the execution logic stays in one place? That's also important to avoid one benchmark changing while the other doesn't.
@@ -0,0 +1 @@
# parcels_benchmarks/benchmark_utils
Intentionally commented out?
__pycache__
build/
parcels/
.asv/
I understand @VeckoTheGecko's comment to mean we should ignore the individual run outputs, but how/where are they stored now then?
When you run the benchmarks, the results are stored under .asv/. I commented in #24 to discuss what we should retain here in version control as a follow-up :)
Oops, I only now realise the PR was merged already. Well, perhaps some of my comments are useful for other rounds of enhancements?
Yes indeed
This PR transitions all benchmarking to use ASV.