Smoke test should abort early if the run fails #608
Comments
Thanks for catching this @phil-blain. What you propose makes sense. I can include this fix in my next PR unless you create a PR first.

You can go ahead. Might be worth it to check the other tests at the same time. From a quick look the [...]
apcraig added a commit to apcraig/CICE that referenced this issue on Jun 19, 2021:
- Fix bugs in history/restart frequency associated with new calendar (CICE-Consortium#589)
- Define frequency in absolute terms relative to 0000-01-01-00000 and document (CICE-Consortium#589)
- Update set_nml.histall to include hourly output (CICE-Consortium#589)
- Update test scripts to cleanly abort if run fails where possible (CICE-Consortium#608)
- Update decomp test so it's rerunnable, remove restart at start of run (CICE-Consortium#601)
- Add ability to do bfbcomp tests with additional options set on command line (CICE-Consortium#569)
- Update documentation of calendar frequency computation, calendar types, and closed boundaries (CICE-Consortium#541)
- Add optional doabort flag to abort_ice to control whether the method aborts. This is useful for testing and code-coverage statistics: with doabort=.false. the actual abort method is not called, but the interfaces and the rest of the code can still be tested.
apcraig added a commit that referenced this issue on Jul 2, 2021:
* Fix history/restart frequency bugs and update scripts

- Fix bugs in history/restart frequency associated with new calendar (#589)
- Update set_nml.histall to include hourly output (#589)
- Update test scripts to cleanly abort if run fails where possible (#608)
- Update decomp test so it's rerunnable, remove restart at start of run (#601)
- Add ability to do bfbcomp tests with additional options set on command line (#569)
- Update documentation of calendar frequency computation, calendar types, and closed boundaries (#541)
- Add optional doabort flag to abort_ice to control whether the method aborts. This is useful for testing and code-coverage statistics: with doabort=.false. the actual abort method is not called, but the interfaces and the rest of the code can still be tested.
- Add histfreq_base and dumpfreq_base ('init' or 'zero') to specify the reference date for history and restart output. Defaults are 'zero' for histfreq_base and 'init' for dumpfreq_base. Setting histfreq_base to 'zero' allows consistent output across multiple runs; setting dumpfreq_base to 'init' supports the standard testing, which requires restarts to be written, for example, 5 days after the start of the run.
- Remove extra abort calls in bcstchk and sumchk on runs that complete fine but don't pass checks; these aborts should never have been there.
- Update documentation.
- Clean up some of the unit tests to better support regression testing
- Modify the initial/restart implementation: the restart namelist is deprecated and now computed internally; modify initial/continue init checks and set restart and use_restart_time as needed; create compute_relative_elapsed method in ice_calendar to improve code reuse; update documentation with regard to initial/continue modes
- Set default use_restart_time to false
The smoke test does not exit with failure when the CICE run fails; it simply writes the FAIL result to the test_output file:

CICE/configuration/scripts/tests/test_smoke.script
Lines 6 to 7 in 1c6c225
[...]
CICE/configuration/scripts/tests/test_smoke.script
Lines 23 to 29 in 1c6c225
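The referenced script lines are not reproduced here, but the reported behavior can be paraphrased in a short POSIX-sh sketch (the actual CICE test scripts are csh, and `simulate_run` and `smoke_demo` are illustrative stand-ins, not names from the real script):

```shell
#!/bin/sh
# Illustrative paraphrase of the reported behavior; the real
# test_smoke.script is csh and differs in detail.
simulate_run() { return 1; }    # stand-in for the CICE run; pretend it fails

testname=smoke_demo
: > test_output                 # start with an empty results file

if simulate_run; then
    echo "PASS ${testname} run" >> test_output
else
    # The failure is only recorded -- the script does not exit here.
    echo "FAIL ${testname} run" >> test_output
fi

# Later steps (baseline generation, comparisons) still execute even
# though there is no model output, which can yield misleading results.
echo "FAIL ${testname} compare" >> test_output

cat test_output
```

The point of the sketch is that both result lines end up in test_output: the run failure is logged, but nothing stops the subsequent steps from running.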
In contrast, the restart test does exit 99:

CICE/configuration/scripts/tests/test_restart.script
Lines 10 to 22 in 1c6c225
This makes more sense in my opinion. If the initial run fails, there is no point going on to the baseline generation, baseline comparison, and BFB comparison steps, is there?
Continuing can produce misleading "missing data" results, when in fact it is the data for the current test that is missing (because the run failed), not the data for the run we are comparing against.
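The proposed fix, applied to the smoke test, would follow the restart test's pattern: record the FAIL and abort immediately with exit 99. A hedged POSIX-sh sketch (again, the real scripts are csh, and the names here are illustrative):

```shell
#!/bin/sh
# Sketch of the proposed early abort; mirrors the exit 99 used by
# test_restart.script as described above. Names are illustrative.
simulate_run() { return 1; }    # stand-in for the CICE run; pretend it fails

run_smoke_test() {
    testname=smoke_demo
    : > test_output
    if simulate_run; then
        echo "PASS ${testname} run" >> test_output
    else
        echo "FAIL ${testname} run" >> test_output
        # Abort immediately: no point attempting baseline generation
        # or comparisons when there is no model output to compare.
        exit 99
    fi
    echo "PASS ${testname} compare" >> test_output   # skipped on failure
}

( run_smoke_test )              # subshell so 'exit 99' doesn't kill this shell
echo "test script exited with status $?"
# prints: test script exited with status 99
```

With this change, test_output contains only the FAIL line for the run itself, so downstream tooling sees a genuine run failure rather than a spurious "missing data" comparison result.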