What is wrong?
If the enkfgdaseobs job is run with more processors than (MPI tasks) x (threads), some observation data will be silently dropped, resulting in an incomplete analysis. Kludges are in place for S4 and Jet, but new systems with different core/node counts will need similar kludges.
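For illustration, a minimal shell sketch of the mismatch; all node and task counts here are hypothetical example values, not numbers from the issue:

```shell
# Hypothetical layout: the allocation provides more cores than the
# (MPI tasks) x (threads) decomposition actually uses, so diagnostic
# output associated with the extra PEs is never collected.
nnodes=3; cores_per_node=40      # 120 cores allocated (example values)
ntasks=100; nthreads=1           # only 100 PEs produce diagnostics
ncores=$((nnodes * cores_per_node))
nused=$((ntasks * nthreads))
echo "PE directories without links: $((ncores - nused))"
```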
What should have happened?
The enkfgdaseobs job should be able to collect all necessary data regardless of how many cores are used.
What machines are impacted?
All or N/A
Steps to reproduce
Set up a cycled experiment and modify config.resources to use a different number of PEs for the eobs job.
Run the enkfgdaseobs job and plot the resulting ingested data points.
Do you have a proposed solution?
I'm not sure whether this is a scripting change in the global-workflow or a code change in the GSI. Once it is fixed, the config.resources file should be simplified to use the same number of processes across all systems.
# if requested, link GSI diagnostic file directories for use later
if [ ${GENDIAG} = "YES" ]; then
   if [ ${lrun_subdirs} = ".true." ]; then
      if [ -d ${DIAG_DIR} ]; then
         rm -rf ${DIAG_DIR}
      fi
      npe_m1="$((${npe_gsi}-1))"
      for pe in $(seq 0 ${npe_m1}); do
         pedir="dir."$(printf %04i ${pe})
         mkdir -p ${DIAG_DIR}/${pedir}
         ${NLN} ${DIAG_DIR}/${pedir} ${pedir}
      done
   else
      err_exit "FATAL ERROR: lrun_subdirs must be true. lrun_subdirs=${lrun_subdirs}"
   fi
fi
Looping over npe_gsi - 1 will not create all of the necessary links if npe_gsi does not equal ncpus = npe_node * nnodes. To fix this, the loop should instead run from 0 to npe_node * nnodes - 1.
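A minimal sketch of that proposed loop-bound change, using hypothetical values for npe_node and nnodes and a temporary directory in place of DIAG_DIR (this is an illustration of the suggested fix, not the actual workflow script):

```shell
# Loop over every core on the allocated nodes (ncpus), not just the
# npe_gsi MPI tasks, so a PE-numbered diagnostic directory exists for
# every core that might write output.
npe_node=4                     # cores per node (example value)
nnodes=2                       # nodes allocated (example value)
ncpus=$((npe_node * nnodes))   # total cores
DIAG_DIR=$(mktemp -d)
for pe in $(seq 0 $((ncpus - 1))); do
  pedir="dir."$(printf %04i ${pe})
  mkdir -p ${DIAG_DIR}/${pedir}   # one directory per core
done
echo "created $(ls ${DIAG_DIR} | wc -l) PE directories"
```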
An example pair of plots from @CoryMartin-NOAA is attached to the issue.
Additional information
This was first captured in #154.