In the test/performance directory there is currently only one test. I am thinking we just start with whatever that test is on a given number of Quartz nodes, then go from there, but I have some other questions:
Are we going to be comparing the solutions for correctness? If so, where are we storing that tally data (which might be massive)?
What are the jobs (C5G7, pulsed sphere)?
What are the job parameters we are shooting for (machine, number of nodes, number of MPI ranks, Numba vs. Python mode, etc.)? A rough sketch of what such a parameter matrix could look like is below.
Where will we be storing the job runtimes? (I say another GitHub repo in the CEMeNT organization.)
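To make the parameter question more concrete, here is a rough sketch (not a decision) of what the job matrix could look like in Python. The problem names, node counts, modes, and the 36-cores-per-node figure for Quartz are placeholders to be replaced once we settle on the actual parameters:

```python
# Hypothetical parameter matrix for the performance runs; all values below
# are placeholders, not agreed-upon settings.
import itertools

problems = ["c5g7", "pulsed_sphere"]   # candidate test problems
nodes = [1, 2, 4, 8]                   # Quartz node counts to sweep
modes = ["python", "numba"]            # interpreted vs. Numba-compiled runs
cores_per_node = 36                    # assumed core count per Quartz node

jobs = [
    {"problem": p, "nodes": n, "mpi_ranks": n * cores_per_node, "mode": m}
    for p, n, m in itertools.product(problems, nodes, modes)
]
```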
I think we don't need to check for correctness, as that is covered by regression and verification tests. But we still need to run problems with huge tally sizes.
For the runtime record, I think we can store it as a text file in the repo. We will also add a Python script that generates plots of the runtime record. As I think more about it, we may want to exclude the performance test from automatic testing. Instead, we should run it manually and commit new runtime records through a PR as needed.
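To illustrate, here is a minimal sketch of what that plotting script could look like, assuming the runtime record is a CSV-style text file (the runtimes.csv name and its columns are hypothetical) with one row per run:

```python
# Sketch of the proposed runtime-record plotting script. Assumes a file
# runtimes.csv with columns: date, problem, machine, nodes, mpi_ranks,
# mode, runtime_s. All names here are placeholders.
import csv
from collections import defaultdict

import matplotlib.pyplot as plt


def load_records(path="runtimes.csv"):
    """Group runtime entries by (problem, mode) so each pair gets its own curve."""
    series = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["problem"], row["mode"])
            series[key].append((int(row["nodes"]), float(row["runtime_s"])))
    return series


def plot_records(series, out="runtimes.png"):
    """Plot runtime vs. node count for each (problem, mode) pair."""
    for (problem, mode), points in series.items():
        points.sort()  # sort by node count
        nodes, runtimes = zip(*points)
        plt.plot(nodes, runtimes, marker="o", label=f"{problem} ({mode})")
    plt.xlabel("Nodes")
    plt.ylabel("Runtime [s]")
    plt.xscale("log")
    plt.yscale("log")
    plt.legend()
    plt.savefig(out, dpi=150)


if __name__ == "__main__":
    plot_records(load_records())
```

The idea would be that each manually run performance job appends a row to the record file, and the committed plot (or CI-generated figure) shows runtime trends over commits and node counts.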
I'll make lists of the test problems and the performance metrics and post them here (I'm going to refer to these as our "metrics of victory").