Bug fix & safeguard updates #623
Comments
A check of the codes which read the netcdf radiance diagnostic files shows that `gsi.x` writes `Time` while `enkf.x` expects `Obs_Time`. Given this, it seems preferable to replace `Time` with `Obs_Time`. @CoryMartin-NOAA, @EdwardSafford-NOAA, and @andytangborn, would changing `Time` to `Obs_Time` cause any problems in monitoring code, workflow, UFO evaluation, etc.?
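As a point of reference, here is a minimal, hypothetical Fortran sketch (not the actual GSI-Monitor or EnKF code) of a reader that accepts either variable name, which is one way downstream consumers could tolerate the rename; the file name `diag_rad.nc` and dimension name `nobs` are assumptions:

```fortran
program read_obs_time
  use netcdf    ! netcdf-fortran library
  implicit none
  integer :: ncid, varid, dimid, nobs, ierr
  real, allocatable :: obs_time(:)

  ierr = nf90_open("diag_rad.nc", nf90_nowrite, ncid)
  if (ierr /= nf90_noerr) stop "cannot open diagnostic file"

  ! Prefer the new name; fall back to the old one so the same
  ! reader works with files written before and after the rename.
  ierr = nf90_inq_varid(ncid, "Obs_Time", varid)
  if (ierr /= nf90_noerr) ierr = nf90_inq_varid(ncid, "Time", varid)
  if (ierr /= nf90_noerr) stop "neither Obs_Time nor Time found"

  ierr = nf90_inq_dimid(ncid, "nobs", dimid)   ! assumed dimension name
  ierr = nf90_inquire_dimension(ncid, dimid, len=nobs)
  allocate(obs_time(nobs))
  ierr = nf90_get_var(ncid, varid, obs_time)
  ierr = nf90_close(ncid)
end program read_obs_time
```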
`gsi.x` `lrun_subdirs` option: PR #571 modified the subroutine which creates the mpi-rank-specific sub-directories. If these sub-directories already exist in the run directory, we don't want `gsi.x` to fail.
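For illustration, a minimal sketch of the kind of guard being discussed, assuming a `dir.NNNN` naming convention for the rank sub-directories; this is hypothetical Fortran, not the actual `gsi.x` subroutine:

```fortran
! Create the mpi-rank sub-directory only if it does not already
! exist, so a pre-populated run directory is not treated as an error.
subroutine make_rank_subdir(mype)
  implicit none
  integer, intent(in) :: mype
  character(len=32) :: dirname
  integer :: exitstat, cmdstat
  logical :: exists

  write(dirname,'(a,i4.4)') 'dir.', mype   ! assumed naming convention

  ! INQUIRE on a bare directory path is compiler-dependent; checking
  ! "<dir>/." works with common compilers (Intel, GNU).
  inquire(file=trim(dirname)//'/.', exist=exists)
  if (.not. exists) then
     ! 'mkdir -p' is itself idempotent, so the guard is belt and braces.
     call execute_command_line('mkdir -p '//trim(dirname), &
                               exitstat=exitstat, cmdstat=cmdstat)
     if (cmdstat /= 0 .or. exitstat /= 0) &
        write(*,*) 'make_rank_subdir: failed to create ', trim(dirname)
  end if
end subroutine make_rank_subdir
```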
Work for this issue will be done in a forked branch.
@EdwardSafford-NOAA, a closer look at the diagnostic file variable names prompted me to check source code in NOAA-EMC/GSI-Monitor. Given this, the do-no-harm approach is to replace `Time` with `Obs_Time`. To keep things simple, @CoryMartin-NOAA and @andytangborn, I can also leave the AOD diagnostic variables as they are. Thoughts? Comments?
@RussTreadon-NOAA we don't use GSI for AOD in any operational code and don't plan to for GFS/GDAS, though we may for RRFS-SD. But since it is self-contained, I see no reason why these can't be made consistent; it shouldn't break anything related to AOD assimilation.
@CoryMartin-NOAA, thanks for the feedback. I agree. My preference is for consistency across netcdf diagnostic files. I replaced `Time` with `Obs_Time` in the AOD diagnostic files as well.
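For completeness, a hedged sketch of the writer side, again illustrative rather than the actual `gsi.x` diagnostic code (the file name, dimension name, and sample values are assumptions):

```fortran
program write_obs_time
  use netcdf    ! netcdf-fortran library
  implicit none
  integer :: ncid, dimid, varid, ierr
  ! Example values: observation times in hours relative to the analysis time.
  real :: obs_time(3) = (/ -3.0, 0.0, 3.0 /)

  ierr = nf90_create("diag_rad.nc", nf90_clobber, ncid)
  ierr = nf90_def_dim(ncid, "nobs", size(obs_time), dimid)
  ! Define the per-observation time variable as Obs_Time so enkf.x
  ! and the monitoring packages all read a consistent name.
  ierr = nf90_def_var(ncid, "Obs_Time", nf90_float, dimid, varid)
  ierr = nf90_enddef(ncid)
  ierr = nf90_put_var(ncid, varid, obs_time)
  ierr = nf90_close(ncid)
end program write_obs_time
```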
While working on g-w issue #1863, a failure was encountered in `enkf.x` when reading the netcdf radiance diagnostic files written by `gsi.x`. A short-term fix is to replace the float `Time` in the `gsi.x` diagnostic files with `Obs_Time`.
Cycled tests on Hera, Orion, and WCOSS2: g-w issue #1863 documents the cycling of the updated code on each machine.
WCOSS2 ctests
Ran ctests on WCOSS2 (Cactus) with the following results. The global_3dvar test failed a timing check: the updat wall time exceeded the contrl wall time. Reran the global_3dvar test; this time the test passed. Apparently there is considerable runtime variability on Cactus. This may be due to system load (filesystem, interconnect, etc.). The netcdf_nmm_fv3 test failed a memory check. This is not a fatal fail.

Orion ctests
The global_3dvar failure is due to a timing check. A check of the timing output shows the loproc_updat wall time exceeds the contrl wall time. The rtma test also failed due to a timing check; again the loproc_updat wall time exceeds the contrl wall time. Reran the rtma test; this time the test passed. Apparently there is considerable runtime variability on Orion as well. This may be due to system load (filesystem, interconnect, etc.). Ctests on WCOSS2 and Orion show acceptable behavior.
Hera ctests
Ran ctests on Hera with the following results. The hwrf_nmm_d2 failure is due to a timing check. A check of the timing output shows the updat wall time exceeds the contrl wall time. This is not a fatal fail.
**Description**

This PR fixes two types of bugs discovered when cycling `gsi.x` and `enkf.x` with intel/2022 in the global workflow:

1. modify variables written to netcdf diagnostic files by `gsi.x` to be consistent with the codes which read these files
2. modify the `lrun_subdirs=.true.` option of `gsi.x` to properly handle the case in which sub-directories already exist in the run directory

Fixes #623

**Type of change**

- [x] Bug fix (non-breaking change which fixes an issue)

**How Has This Been Tested?**

Ctests have been run on Hera, Orion, and WCOSS2 (Cactus) with acceptable behavior. A global parallel covering the period 2021073106 through 2021080118 has been run on Hera, Orion, and WCOSS2 (Cactus). All global workflow jobs ran as expected.

**Checklist**

- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] New and existing tests pass with my changes
Two GSI-specific issues were identified while testing develop at 008c63c in the global-workflow:

1. `gsi.x` encodes `Time` into the netcdf radiance diagnostic file, but `enkf.x` expects `Obs_Time`.
2. PR #571 modified the way `gsi.x` creates sub-directories. This update did not consider the case in which the directory to be created already exists.

This issue is opened to address both of these points.