UFS P7c memory issue #746

Closed · jiandewang opened this issue Aug 13, 2021 · 80 comments · Fixed by #756

Labels: bug (Something isn't working)

Comments

@jiandewang (Collaborator) commented Aug 13, 2021

Description

All UFS P7c runs (using the workflow) failed at day 18 (using 300 s for fv3) or day 13 (using 225 s for fv3), most likely due to a memory leak.

To Reproduce:

git clone https://github.com/NOAA-EMC/global-workflow
cd global-workflow
git checkout feature/coupled-crow
git submodule update --init --recursive
sh checkout.sh -c
sh build_all.sh -c
sh link_fv3gfs.sh emc hera coupled

and then use the "prototype7" case file.

Output

Output logs

One sample run log is saved at /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/UFS-P7c/LOG/gfs.forecast.highres.log.0; the error information is around line 297663.

slurmstepd: error: Detected 1 oom-kill event(s) in StepId=21542673.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
srun: error: h34m17: task 473: Out Of Memory
srun: launch/slurm: step_signal: Terminating StepId=21542673.0
slurmstepd: error: *** STEP 21542673.0 ON h33m12 CANCELLED AT 2021-08-11T23:57:15

PET file can be found at /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/UFS-P7c/LOG/PET
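A quick cross-check that the step really hit the memory ceiling (rather than failing for some other reason) is Slurm's accounting data; a minimal sketch using the step ID from the log above (the --format fields are standard sacct columns):

  # Peak resident memory per task for the failed step; MaxRSSTask/MaxRSSNode
  # identify which task and node hit the high-water mark.
  sacct -j 21542673.0 --format=JobID,State,ExitCode,MaxRSS,MaxRSSTask,MaxRSSNode,ReqMem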

@jiandewang added the bug label on Aug 13, 2021
@DomHeinzeller (Contributor)

@jiandewang in order to investigate this, we (@DeniseWorthen and @climbfuji) need a fully self-contained run directory that we can work with. That means an experiment directory with all input files, configuration files, and the job submission script. Can you provide this on hera, please? Thanks.

@jiandewang (Collaborator, Author)

run dir which contains all input and configuration files: /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/DATAROOT/R_20120101/2012010100/gfs/fcst.125814

run log: /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/COMROOT/R_20120101/logs/2012010100/gfs.forecast.highres.log.0

this is through workflow thus there is no job_card (as in rt.sh) in run dir

@climbfuji (Collaborator)

> run dir which contains all input and configuration files: /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/DATAROOT/R_20120101/2012010100/gfs/fcst.125814
>
> run log: /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/COMROOT/R_20120101/logs/2012010100/gfs.forecast.highres.log.0
>
> this is through workflow thus there is no job_card (as in rt.sh) in run dir

I will not be able to work on this unless I get a job submission script. I believe rocoto can dump it out using some verbose flag. @JessicaMeixner-NOAA knows.

@JessicaMeixner-NOAA (Collaborator)

So I printed out the memory profile from the p7b runs, and the memory usage is lower in the runs from the workflow, so my thought was that maybe it's an environment variable we just need to set in the workflow. I'm planning on setting up a run directory and then using a job_card from rt.sh (appropriately changed) to see if that will run. Either way I'll get a run directory with a job_card at the end of it.

@JessicaMeixner-NOAA (Collaborator)

I do know that you can get that job submission script dumped out but I haven't done that in forever, I'll see if I can dig out those instructions.

@climbfuji (Collaborator)

> I do know that you can get that job submission script dumped out but I haven't done that in forever, I'll see if I can dig out those instructions.

Thanks, Jessica. I was hoping to be able to use Forge DDT and MAP to see what is going on. A self-contained run directory will be very helpful for this.

@yangfanglin (Collaborator) commented Aug 13, 2021 via email

@JessicaMeixner-NOAA (Collaborator)

@yangfanglin I agree it's likely something in the workflow's HERA.env file that needs to be updated. In a log file for p7b output I found (/scratch1/NCEPDEV/stmp2/Jessica.Meixner/FV3_RT/rt_73915/cpld_bmark_wave_v16_p7b_35d_2013040100/err):

  echo 'Model started: ' Fri Aug 13 02:34:11 UTC 2021
  export MPI_TYPE_DEPTH=20
  MPI_TYPE_DEPTH=20
  export OMP_STACKSIZE=512M
  OMP_STACKSIZE=512M
  export OMP_NUM_THREADS=2
  OMP_NUM_THREADS=2
  export ESMF_RUNTIME_COMPLIANCECHECK=OFF:depth=4
  ESMF_RUNTIME_COMPLIANCECHECK=OFF:depth=4
  export PSM_RANKS_PER_CONTEXT=4
  PSM_RANKS_PER_CONTEXT=4
  export PSM_SHAREDCONTEXTS=1
  PSM_SHAREDCONTEXTS=1

but the OMP_STACKSIZE seems larger in the workflow, so that doesn't obviously explain it. I'm working on setting up the canned case now and hopefully will have it soon.
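One way to confirm whether the rt.sh and workflow runs really differ only in their runtime environment is to diff the exported variables captured by the shell traces in the two logs; a sketch, with hypothetical file names standing in for the rt.sh err file and the workflow forecast log:

  # Pull the 'export VAR=value' trace lines out of each log and compare them.
  grep -oE 'export [A-Z_]+=[^ ]+' rt_sh.err         | sort -u > env_rt.txt
  grep -oE 'export [A-Z_]+=[^ ]+' workflow_fcst.log | sort -u > env_workflow.txt
  diff env_rt.txt env_workflow.txt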

@JessicaMeixner-NOAA (Collaborator)

I've created a canned case on hera here:
/scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/CannedCaseInput

My hope is that you can copy this directory to yours and then just "sbatch job_card", but it hasn't been tested yet, so I'm not 100% sure this works. The job_card is from rt.sh -- which is what Rahul suggested earlier and would be testing along the same lines as Fanglin was suggesting, with it perhaps being an environment variable issue. I'll update the issue after my test goes through.

@JessicaMeixner-NOAA (Collaborator)

The canned case is running for me now (the first time I submitted I had a module load error, but resubmission worked so?). Now we'll have to wait a couple of hours to see if the different environmental variables mean we don't get the same memory errors.

@DomHeinzeller (Contributor)

> The canned case is running for me now (the first time I submitted I had a module load error, but resubmission worked so?). Now we'll have to wait a couple of hours to see if the different environmental variables mean we don't get the same memory errors.

Great progress! I'll wait for the outcome of your experiment before spending time on this.

@JessicaMeixner-NOAA (Collaborator)

See the output folder /scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try02:

On day 18 in the err file we have:

 472: forrtl: severe (174): SIGSEGV, segmentation fault occurred
 472: Image              PC                Routine            Line        Source
 472: ufs_model          000000000506C6BC  Unknown               Unknown  Unknown
 472: libpthread-2.17.s  00002B3D55DFF630  Unknown               Unknown  Unknown
 472: libmpi.so.12       00002B3D55471AF9  MPI_Irecv             Unknown  Unknown
 472: libmpifort.so.12.  00002B3D54EA32A0  mpi_irecv             Unknown  Unknown
 472: ufs_model          00000000041A7FCB  mpp_mod_mp_mpp_tr         126  mpp_transmit_mpi.h
 472: ufs_model          00000000041DEE25  mpp_mod_mp_mpp_re         170  mpp_transmit.inc
 472: ufs_model          0000000004338962  mpp_domains_mod_m         713  mpp_group_update.h
 472: ufs_model          000000000245F3C0  fv_mp_mod_mp_star         762  fv_mp_mod.F90
 472: ufs_model          00000000020CDEC2  dyn_core_mod_mp_d         931  dyn_core.F90
 472: ufs_model          000000000211B93A  fv_dynamics_mod_m         651  fv_dynamics.F90
 472: ufs_model          000000000209AEC0  atmosphere_mod_mp         683  atmosphere.F90
 472: ufs_model          0000000001FCBEAE  atmos_model_mod_m         793  atmos_model.F90
 472: ufs_model          0000000001E9BB0A  module_fcst_grid_         785  module_fcst_grid_comp.F90

So even with the environment variables used from rt.sh we still seem to be running into a memory problem. This log file does not have the explicit "ran out of memory" message, but I'm assuming that's what is behind the failure here. I missed the setting for turning the PET logs on with the ESMF memory profile information, so there will be a Try03 folder with that info soon.

@JessicaMeixner-NOAA (Collaborator)

Okay, so I went back and looked at all the log files from the runs that @jiandewang made (/scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/COMROOT/R_201*/logs/201*/gfs.forecast.highres.log) and only one of those failed explicitly because of Out of Memory. The memory usage in the run I made with memory profiles turned on (/scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try03) does not seem higher than normal. I have seen memory errors show up as SIGSEGV before, but I'm wondering whether we have a memory error or something else.

@yangfanglin (Collaborator)

the numbers in /scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/EXPROOT/R_20120101/config.fv3 do not add up. npe_fv3 cannot be 288 if layout_x_gfs=12 and layout_y_gfs=16. The setting WRTTASK_PER_GROUP_GFS=88 is also odd. You may want to increase WRITE_GROUP_GFS as well.

@JessicaMeixner-NOAA (Collaborator)

@yangfanglin this is probably an issue of the old versus CROW configuration, the values used in the forecast directory seem fine to me (/scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/DATAROOT/R_20120101/2012010100/gfs/fcst.125814):
In nems.configure:
MED_petlist_bounds: 0 1151
ATM_petlist_bounds: 0 1239

in input.nml:
&fv_core_nml
layout = 12,16
io_layout = 1,1

in model_configure:
write_groups: 1
write_tasks_per_group: 88

And 12 x 16 x 6 = 1152 (which is the number in the mediator PET list in nems.configure), and 1152 + 88 = 1240 (which matches the ATM PET list).

The 88 might be an odd number but it means that the write group is filling out an entire node and not sharing with another component -- this is the configuration I got to run (after having memory problems w/the write group) for p6.
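For reference, the same bookkeeping written out explicitly; a small sketch of the task-count arithmetic using the values quoted above (six cubed-sphere tiles assumed, as usual for FV3):

  layout_x=12; layout_y=16; ntiles=6
  write_groups=1; write_tasks_per_group=88

  fcst_tasks=$(( layout_x * layout_y * ntiles ))                      # 1152 -> MED/forecast PETs 0-1151
  atm_tasks=$(( fcst_tasks + write_groups * write_tasks_per_group ))  # 1240 -> ATM PETs 0-1239
  echo "forecast tasks: ${fcst_tasks}, total ATM tasks: ${atm_tasks}"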

@JessicaMeixner-NOAA (Collaborator)

@yangfanglin since we only write output every 6 hours, having 1 write group has always been sufficient in terms of writing efficiency, is there some reason to have multiple write groups for memory?

@jiandewang (Collaborator, Author)

> Okay, so I went back and looked at all the log files from the runs that @jiandewang made (/scratch2/NCEPDEV/climate/Jiande.Wang/z-crow-flow/wrk-P7C/COMROOT/R_201*/logs/201*/gfs.forecast.highres.log) and only one of those failed explicitly because of Out of Memory. The memory usage in the run I made with memory profiles turned on (/scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try03) does not seem higher than normal. I have seen memory errors show up as SIGSEGV before, but I'm wondering whether we have a memory error or something else.

@JessicaMeixner-NOAA the error in the log file depends on which node the system detects as having the issue, so they will not be the same. We are lucky that one of the log files contains the "out of memory" info. The fact that all the jobs were killed by the system is a clear indication that there is some memory issue.

@bingfu-NOAA

I think we can double the threads to check if it is a memory issue, right?

@jiandewang (Collaborator, Author)

@bingfu-NOAA right now we are using 2 threads and the model died at day 18; using 4 threads will slow down the system and we will not be able to finish the 35-day run in 8 hours. In fact, in one of my tests I used 225 s for fv3 and the model died at day 13.

@JessicaMeixner-NOAA (Collaborator)

The test where the 4-thread run slowed down was also using a different layout for the atm model, to avoid doubling the number of nodes. I can try one test that only increases the thread count (which in theory shouldn't slow it down) just to see whether it's really memory or not. It'll probably take a while to get through the queue, but I will report back when I have results.

@JessicaMeixner-NOAA (Collaborator)

Okay, it does not appear that the 4-thread slowdown was just because I used a smaller atm layout; even using the same atm layout, it's much slower. I don't think we'll make it to the 18 days we reached with 2 threads.

@junwang-noaa (Collaborator) commented Aug 17, 2021 via email

@JessicaMeixner-NOAA (Collaborator)

Yes, all the components are using the same number of threads, and the simulation slows down, which I would not expect.

Yes, the PET log files show that memory is increasing during the integration. You can find that, for example, in /scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try03 for an atm PET, a write-group PET, and the ocean. Ice and wave do not have any memory information available. That is a 2-thread job.

The 4-thread run directory can be seen here: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/thr4/DATAROOT/testthr4/2013040100/gfs/fcst.25077, with the log file here: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/thr4/COMROOT/testthr4/logs/2013040100/gfs.forecast.highres.log, which only got to day 12 before being killed when the 8-hour wall clock ran out.
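To see that growth quantitatively, the memory figures can be pulled out of one PET log and scanned over the integration; a sketch only, since the exact keyword and the column holding the resident-set size in the ESMF PET logs are assumptions to verify against one file by eye:

  petlog=PET0000.ESMF_LogFile   # pick one PET log from the Try03 directory
  # Keep the last field of each memory line (assumed here to be the memory figure)
  # so the trend over the run can be eyeballed or plotted.
  grep -i "meminfo" "$petlog" | awk '{print $NF}' > pet_mem.txt
  head pet_mem.txt; tail pet_mem.txt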

@JessicaMeixner-NOAA (Collaborator)

I was able to run a successful 35-day run (the same as the canned case on hera, but through the workflow) on Orion. I also tried just updating to the most recent version of ufs-weather-model on hera, and confirmed that it also dies with SIGTERM errors.

@JessicaMeixner-NOAA (Collaborator)

I ran a test where I set FHMAX=840 (my way of turning off I/O for the atm model) and the model still failed at day 18 (the first run died with a failed node also on day 18).

Based on suggestions from the coupling tag-up, the next steps I will try will be to:
-- Turn off waves
-- Turn debug on (without waves)
-- Run CMEPS on different tasks
-- Turn on/off different recently added options from p7c that were not in p7b
-- Run 1 thread

All other suggestions are welcome. I'll report on results as I get them.
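For the "turn on/off recently added options" item, a single namelist switch can be flipped in a copy of the canned-case run directory and resubmitted; a sketch for do_ca, assuming it sits in input.nml as in the run directories above:

  # Flip do_ca off in the namelist, then resubmit the canned case.
  sed -i 's/do_ca *= *\.true\./do_ca = .false./' input.nml
  sbatch job_card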

@JessicaMeixner-NOAA (Collaborator)

As expected, running with 1 thread we only got through 6 days of simulation:
Rundir: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/thread1/DATAROOT/thread01/2013040100/gfs/fcst.207953
log: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/thread1/COMROOT/thread01/logs/2013040100/gfs.forecast.highres.log

The run without waves is still running, Rundir: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/nowave/DATAROOT/nowave02/2013040100/gfs/fcst.154732
log: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/nowave/COMROOT/nowave02/logs/2013040100/gfs.forecast.highres.log

Running with different atm physics settings (most of the jobs are still in the queue):
-- With lheatstrg and lseaspray set to false: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try08nolheat
-- With do_ca false: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try07noca
-- Without MERRA2: /scratch2/NCEPDEV/climate/Jessica.Meixner/p7memissue/Try06nomerra

A job running with debug is in the queue. 
I'll post more updates when I have them. 

@JessicaMeixner-NOAA (Collaborator)

The run with do_ca=false succeeded in running 35 days; all my other tests so far have failed. In the log files with do_ca=true, there are lots of statements such as:

 192:  CA cubic mosaic domain decomposition
 192: whalo =    1, ehalo =    1, shalo =    1, nhalo =    1
 192:   X-AXIS =  320 320 320 320 320 320 320 320 320 320 320 320
 192:   Y-AXIS =  240 240 240 240 240 240 240 240 240 240 240 240 240 240 240 240

However, if you search the log file for "domain decomposition", it is only written once for the various "MOM" and "Cubic" decompositions. I'm trying to see if I can add memory profile statements to check whether this is an issue or not, but could this maybe be done only once for CA, @lisa-bengtsson? Any other ideas of where we might have memory leaks with do_ca=true?
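A quick way to check whether the CA decomposition setup really happens repeatedly (rather than once at initialization) is to count the banners in the run log; a sketch against the log file quoted earlier in this issue:

  # How often does each decomposition banner appear? A CA count that grows with
  # the number of time steps would point at repeated setup rather than a one-time init.
  grep -c "CA cubic mosaic domain decomposition" gfs.forecast.highres.log.0
  grep "domain decomposition" gfs.forecast.highres.log.0 | sort | uniq -c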

@lisa-bengtsson (Contributor)

Sorry, I have not seen that before; did the debug run indicate anything? It would be great if you could add memory profile statements. The halo exchange is in update_ca.F90 in the routine evolve_ca_sgs; that could be a start perhaps?

@lisa-bengtsson (Contributor)

The routine is called update_cells_sgs inside update_ca.F90.

@junwang-noaa (Collaborator) commented Aug 20, 2021 via email

@lisa-bengtsson (Contributor)

What a relief, thank you for testing! @junwang-noaa since it is just a single change that doesn't change any baseline, could it be merged with an existing PR?

@junwang-noaa (Collaborator) commented Aug 20, 2021 via email

@jiandewang (Collaborator, Author)

> Currently we are trying to commit the P7 related issues. We have the FMS PR that does not change results, but we are waiting for the FMS library to be available on the supported platforms; the fv3 dycore update PR is on hold as it changes results.


@junwang-noaa: can you tell me which PET file you looked at for MOM?

@junwang-noaa (Collaborator) commented Aug 20, 2021 via email

@lisa-bengtsson (Contributor)

I created two PR's:
#755
NOAA-PSL/stochastic_physics#44

@SMoorthi-emc (Contributor) commented Aug 20, 2021 via email

@junwang-noaa (Collaborator) commented Aug 20, 2021 via email

@SMoorthi-emc (Contributor) commented Aug 20, 2021 via email

@junwang-noaa (Collaborator)

I am wondering if it will fix the restart issue in P7c.

@junwang-noaa (Collaborator) commented Aug 20, 2021

Thanks, Lisa.

> I created two PR's:
> #755
> noaa-psd/stochastic_physics#44

@SMoorthi-emc (Contributor) commented Aug 20, 2021 via email

@JessicaMeixner-NOAA (Collaborator)

While the main p7c memory issue is at least solved enough to run 35 days, since I ran the debug test (without waves) in trying to help debug this issue, I thought I'd post the results here all the same.

Log file:
/scratch2/NCEPDEV/climate/Jessica.Meixner/p7update/debug-test01/COM/debug01/logs/2013040100/gfs.forecast.highres.log
The error:

srun: error: h33m38: task 1152: Segmentation fault (core dumped)
srun: launch/slurm: _step_signal: Terminating StepId=21743413.0
slurmstepd: error: *** STEP 21743413.0 ON h4c03 CANCELLED AT 2021-08-19T13:07:37 ***
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source
ufs_model          000000000C8B5B2E  Unknown               Unknown  Unknown
libpthread-2.17.s  00002AC3DF0B4630  Unknown               Unknown  Unknown
ufs_model          0000000007617A40  Unknown               Unknown  Unknown
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source
ufs_model          000000000C8B5B2E  Unknown               Unknown  Unknown
libpthread-2.17.s  00002B5EC095C630  Unknown               Unknown  Unknown
ufs_model          0000000006BE0AA6  cires_ugwpv1_solv         587  cires_ugwpv1_solv2.F90
ufs_model          0000000006A4D033  ugwpv1_gsldrag_mp         675  ugwpv1_gsldrag.F90
ufs_model          000000000537CD8B  oupled_nsstnoahmp        1283  ccpp_FV3_GFS_v16_coupled_nsstNoahmpUGWPv1_physics_cap.F90
ufs_model          0000000005022BBB  ccpp_static_api_m         731  ccpp_static_api.F90
ufs_model          0000000005013A7C  ccpp_driver_mp_cc         188  CCPP_driver.F90
libiomp5.so        00002B5EBED64A43  __kmp_invoke_micr     Unknown  Unknown
libiomp5.so        00002B5EBED27CDA  Unknown               Unknown  Unknown
libiomp5.so        00002B5EBED295B6  __kmp_fork_call       Unknown  Unknown
libiomp5.so        00002B5EBECE7BB0  __kmpc_fork_call      Unknown  Unknown
ufs_model          0000000005011752  ccpp_driver_mp_cc         169  CCPP_driver.F90
ufs_model          000000000252B405  atmos_model_mod_m         346  atmos_model.F90
ufs_model          0000000001EE518A  module_fcst_grid_         787  module_fcst_grid_comp.F90
ufs_model          00000000009806CE  _ZN5ESMCI6FTable1        2036  ESMCI_FTable.C
ufs_model          0000000000984316  ESMCI_FTableCallE         765  ESMCI_FTable.C
ufs_model          000000000076874A  _ZN5ESMCI2VM5ente        1211  ESMCI_VM.C
ufs_model          0000000000981D67  c_esmc_ftablecall         922  ESMCI_FTable.C
ufs_model          00000000007F0B51  esmf_compmod_mp_e        1214  ESMF_Comp.F90
ufs_model          0000000000B73F74  esmf_gridcompmod_        1889  ESMF_GridComp.F90
ufs_model          0000000001EC23BF  fv3gfs_cap_mod_mp        1023  fv3_cap.F90
ufs_model          0000000001EC04DC  fv3gfs_cap_mod_mp         920  fv3_cap.F90

@junwang-noaa (Collaborator) commented Aug 20, 2021

@JessicaMeixner-NOAA would you please create a separate issue for the P7c debug error so that we can better track each problem? Thanks.

@junwang-noaa (Collaborator) commented Aug 20, 2021

@lisa-bengtsson At this morning's code manager meeting, we decided to combine your stochastic physics PR #44 with Denise's CICE memory profile PR #756 (coming out this morning; neither changes results) and get the PR committed today.

@SMoorthi-emc Since your fix may change results, we need to do some testing to see if a new baseline is required. Would you please create a CCPP PR?

@lisa-bengtsson (Contributor)

@junwang-noaa great, thanks. Please let me know if I can do anything else regarding this PR.

@SMoorthi-emc (Contributor) commented Aug 20, 2021 via email

@junwang-noaa (Collaborator)

@lisa-bengtsson Once the RT passes, Phil needs to review/commit the changes in the stochastic physics repo; then we can commit the ufs-weather-model PR.

@GeorgeGayno-NOAA (Contributor)

> Jun, a bug fix is needed for sfsub.F90. The fix is to add an "if" in a do loop. I think George Gayno is creating a "ccpp-physics" issue on this. (This fix may change results if there are problem points.) The change should be around line 2021:
>
>   do i=1,len
>     if (nint(slmskl(i)) /= 1) then
>       if (sicanl(i) >= min_ice(i)) then
>         slianl(i) = 2.0_kind_io8
>       else
>         slianl(i) = zero
>         sicanl(i) = zero
>       endif
>     endif
>   enddo
>
> Moorthi

I just opened an issue: NCAR/ccpp-physics#719

@jiandewang (Collaborator, Author)

@junwang-noaa I tested the latest 3 commits of MOM6 in UFS; all of them have the memory leak issue. Below is from Marshall Ward:

> I have started doing more aggressive memory checking, and recently fixed many of them, but we know of a few that are not yet fixed.
>
> Nearly all of the leaks are because we do not properly call the MOM_end_*() functions during the finalization, so they do not normally affect the model during the run.
>
> We are planning to enable valgrind testing once we've fixed all the known leaks, but this is on hold until we finish up some other projects.
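For reference, the kind of leak check being described would look roughly like the sketch below; it assumes a small MOM6-only driver or reduced case, since the full coupled model is far too large to run under valgrind routinely:

  # Full leak check on a small test executable; one log file per process.
  valgrind --leak-check=full --show-leak-kinds=definite --track-origins=yes \
           --log-file=valgrind.%p.log ./mom6_driver   # hypothetical small test executable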

@junwang-noaa (Collaborator) commented Aug 23, 2021 via email
