Refactor EMC_post decomposition from 1D to 2D as part of EMC_post refactoring #274
Comments
And Input I/O of model state fields IS affected by decomposition, just noting. |
Wading through the code. A large fraction of the work will be modifying the I/O to either scatter 2D subdomains rather than 1D contiguous slices (the serial option), or modifying the parallel I/O to get the subdomains. The rest looks like bookkeeping with loop indices but I have not looked for stencil operators yet that need halo exchanges. I need to learn much more about the NetCDF API also. That's the status so far. Working on the standalone FV3 portion first. |
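For illustration of the scatter change described above, here is a minimal sketch (not UPP code) of distributing a full 2D field to lon-lat rectangular subdomains. The routine name scatter_2d, the ista/iend/jsta/jend arguments, and the pack-and-send strategy are illustrative assumptions; UPP's actual bookkeeping lives in MPI_FIRST.f and CTLBLK.f.

subroutine scatter_2d(full, sub, ista, iend, jsta, jend, comm)
  use mpi
  implicit none
  real,    intent(in)  :: full(:,:)                  ! full (im,jm) field, valid on rank 0 only
  integer, intent(in)  :: ista, iend, jsta, jend, comm
  real,    intent(out) :: sub(ista:iend, jsta:jend)  ! this task's lon-lat rectangle
  integer :: me, npes, ierr, p, n
  integer :: bounds(4), pb(4)
  real, allocatable :: buf(:)

  call mpi_comm_rank(comm, me, ierr)
  call mpi_comm_size(comm, npes, ierr)
  bounds = (/ ista, iend, jsta, jend /)

  if (me == 0) then
    do p = npes-1, 0, -1
      if (p == 0) then
        pb = bounds                                   ! rank 0 keeps its own patch
      else
        call mpi_recv(pb, 4, mpi_integer, p, 1, comm, mpi_status_ignore, ierr)
      end if
      n = (pb(2)-pb(1)+1) * (pb(4)-pb(3)+1)
      allocate(buf(n))
      buf = reshape(full(pb(1):pb(2), pb(3):pb(4)), (/ n /))   ! pack: a 2D patch is NOT contiguous in memory
      if (p == 0) then
        sub = reshape(buf, shape(sub))
      else
        call mpi_send(buf, n, mpi_real, p, 2, comm, ierr)
      end if
      deallocate(buf)
    end do
  else
    call mpi_send(bounds, 4, mpi_integer, 0, 1, comm, ierr)    ! tell rank 0 which rectangle we own
    call mpi_recv(sub, size(sub), mpi_real, 0, 2, comm, mpi_status_ignore, ierr)
  end if
end subroutine scatter_2d

With the 1D latitude-strip layout, each task's piece is contiguous and the packing step disappears; that is the extra bookkeeping the comment above refers to.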
@GeorgeVandenberghe-NOAA Agreed. My plan is to have @JesseMeng-NOAA and @BoCui-NOAA do the bookkeeping parts of changing I loop indices and take care of halo exchanges when necessary. |
Work on the standalone post was promising. There were many issues just assembling a testcase for inline post for ufs-weather-model. I am trying to find where in the model this is called from and how the model history files are assembled on the I/O group side. It took several days to get a working testcase and then isolate a UPP library from the build so I could work with it, and that's where I am now, on Jet, since WCOSS is down for a week. This process has taken much more time than expected. GWV 3/17 |
For what it's worth, the Intel tracebackqq('string ',iret) issues a traceback from wherever it's called and then keeps going; if iret is zero the program terminates instead. I tried that. Using it, it looks like PROCESS(), a major post routine, is called directly from something in ESMF, and there are at least thirty ESMF routines in the call chain above it. Intel on Jet is currently frozen by a transient system issue. |
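For reference, the Intel runtime call being described looks roughly like this (requires ifort's IFCORE module; the wrapper name where_am_i is made up). With a negative user_exit_code the routine prints a traceback and returns; with 0 it terminates.

subroutine where_am_i(tag)
  use ifcore, only: tracebackqq
  implicit none
  character(*), intent(in) :: tag
  call tracebackqq(string=tag, user_exit_code=-1)   ! print the call chain, then keep going
end subroutine where_am_i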
Thank you @GeorgeVandenberghe-NOAA for the update. Sounds like you're testing stand-alone post and in-line post at the same time? Could you come to next Tuesday's UPP re-engineering tag-up? |
Of course. It's my main project right now.
|
After sync'ing with the current EMC_post develop head, I can no longer reproduce the results from that code when I apply my changes to SURFCE.f (found this while checking Bo's changes, which DO reproduce; not his problem, MINE). So far the changes consist of changing all arrays dimensioned 1:im to isx:iex, but setting isx to 1 and iex to im STILL produces differences from when the im or 1:im dimension is left in. The arrays should be EXACTLY the same shape, so I am figuring it out. I was about to submit a PR with the changes for inspection only (not for incorporation), but now I have this issue. |
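To make the redimensioning concrete, here is a small self-contained sketch (not the actual SURFCE.f code) of the pattern being described: an array declared 1:im versus isx:iex, where isx=1 and iex=im should give an identical shape and identical values.

program redim_demo
  implicit none
  integer, parameter :: im = 10, jsta = 1, jend = 4
  integer :: isx, iex, i, j
  real, allocatable :: a_old(:,:), a_new(:,:)

  isx = 1
  iex = im                                ! the 1D special case of the 2D layout

  allocate(a_old(1:im,    jsta:jend))     ! original declaration style
  allocate(a_new(isx:iex, jsta:jend))     ! redimensioned declaration style

  do j = jsta, jend
    do i = 1, im
      a_old(i,j) = real(i + 100*j)
    end do
    do i = isx, iex
      a_new(i,j) = real(i + 100*j)
    end do
  end do

  print *, 'max abs diff =', maxval(abs(a_old - a_new))   ! expect exactly 0.0
end program redim_demo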
Also, the differences are small, according to cmp -l. |
Look at the SURFCE.f history on GitHub: the latest update was Jim's fix to a threading violation 7 days ago. The commit prior to this was back in Dec. I believe you started your fork after Dec, right? @WenMeng-NOAA Did the latest threading fix change UPP regression test results? |
@HuiyaChuang-NOAA There are no changed results from Jim's fixes in UPP regression tests. |
I am working from a sync'ed UPP develop point. I just recloned it and added the SURFCE and other necessary fixes, so my old fork isn't the issue. A difference of just two bytes in the middle of each of the files suggests something trivial, like a pad, is initializing differently, but it still causes a cmp exact regression test to fail.
I will prepare a PR soon to show my differences. Don't merge the PR, just examine it.
|
Sometimes changes will change the UPP grib2 file size. In the UPP regression tests we added field-by-field value comparison; it is fine as long as there are no unexpected changed results. |
File sizes didn't change. Two bytes inside each of them, in the middle, did.
|
@GeorgeVandenberghe-NOAA Can you point me to your regression test output directory? I will take a look. |
I was asking for a code evaluation only. The regression test on Jet only passes for ONE case (the one examined), with fields the same but two bytes different in the Grib files. I submitted the PR for an eyeball of my code only.
Test output is on /mnt/lfs4/HFIP/hfv3gfs/gwv/post/emcpost/reg/fv3r_2019062000. Base files for comparison are in ./BASE in this directory.
I could do this for all of the others easily but am still working out a byte difference issue in THIS one.
|
I found the following line in CLDRAD.f.
Why do we need the full domain? I am concerned I may miss others that need the full domain, although so far I am only redimensioning partial-J-domain arrays, replacing IM with isx:iex. |
@GeorgeVandenberghe-NOAA My understanding is that full_cld is used for calling routine AllGETHERV for halo exchange? See line 938. @HuiyaChuang-NOAA may chime in with details. |
Wen is right. FULL_CLD(IM,JM) must be defined for the full domain because of subroutine allgetherv (mpi_allgather): mpi_allgather is called there, and grid1 must have dimension (im,jm). I took a note in the document https://docs.google.com/spreadsheets/d/10jlqaBHlcg8xHHc4kH1JWJbTPGMcZeZLNbMCszdza2c/edit#gid=0 |
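For readers unfamiliar with the pattern, here is a minimal sketch (not the actual ALLGETHERV interface; the routine and argument names are invented) of gathering 1D latitude strips into a full (im,jm) array with mpi_allgatherv, which is why the receiving array has to be dimensioned for the whole domain.

subroutine gather_full(sub, full, im, jm, jsta, jend, comm)
  use mpi
  implicit none
  integer, intent(in)  :: im, jm, jsta, jend, comm
  real,    intent(in)  :: sub(im, jsta:jend)    ! this task's latitude strip (1D layout)
  real,    intent(out) :: full(im, jm)          ! every task ends up with the whole field
  integer :: npes, ierr, p
  integer :: mycount(1)
  integer, allocatable :: counts(:), displs(:)

  call mpi_comm_size(comm, npes, ierr)
  allocate(counts(npes), displs(npes))

  mycount(1) = im * (jend - jsta + 1)           ! a latitude strip is contiguous, so no packing needed
  call mpi_allgather(mycount, 1, mpi_integer, counts, 1, mpi_integer, comm, ierr)
  displs(1) = 0
  do p = 2, npes
    displs(p) = displs(p-1) + counts(p-1)
  end do

  ! assumes ranks own consecutive, increasing latitude bands, as in the 1D layout
  call mpi_allgatherv(sub, mycount(1), mpi_real, full, counts, displs, mpi_real, comm, ierr)
end subroutine gather_full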
Has anyone looked at the "inspection only" PR submitted late last week, April 1 or so for second opinions and comments? |
@GeorgeVandenberghe-NOAA I haven't got the chance to look at it yet. I might do it this week. |
Ok. It's not slowing me down, so don't be rushed.
|
I will start to look at it this week. |
There is a data structure datapd. The following line in CLDRAD.f suggests it's used as some kind of halo pad. Could someone describe this in more detail before I have to figure it out myself? |
My understanding is that this array is for writing field values into GRIB2 over the full domain. You will see it in a lot of routines. I would defer this question to @HuiyaChuang-NOAA or @junwang-noaa for details. |
This is how it's allocated. I haven't changed it:
allocate(datapd(im,1:jend-jsta+1,nrecout+100))
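As a small self-contained sketch of how datapd is typically filled, one latitude band per task per output record: the surrounding program and the values are invented, but the index pattern follows the allocation quoted above.

program datapd_demo
  implicit none
  integer, parameter :: im = 8, jm = 6, jsta = 3, jend = 4, nrecout = 2
  real :: grid1(im, jm)
  real, allocatable :: datapd(:,:,:)
  integer :: i, j, jj, cfld

  grid1 = 1.0
  allocate(datapd(im, 1:jend-jsta+1, nrecout+100))   ! same shape as quoted above
  cfld = 1                                           ! first output record
  do j = 1, jend - jsta + 1                          ! local row index, 1-based
    jj = jsta + j - 1                                ! corresponding global latitude
    do i = 1, im                                     ! full longitude row in the 1D layout
      datapd(i, j, cfld) = grid1(i, jj)
    end do
  end do
  print *, 'stored rows for this task:', jend - jsta + 1
end program datapd_demo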
|
@GeorgeVandenberghe-NOAA I have reviewed your inspection PR which makes sense to me. I sent you my comments on specific places. Thanks! |
I will change the variable names to be consistent and review the timer changes. I believe I found the timer either not entirely working or only reporting to the integer-truncated second. MPI_WTIME is better than rtc() on Linux systems. The timer is also reporting milliseconds, and I have a preference for seconds as the unit (with a resolution of 10^-5 seconds or better).
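A minimal sketch of the kind of timer being argued for, based on MPI_WTIME and reporting seconds (the busy-work subroutine is only there to have something to time):

program wtime_demo
  use mpi
  implicit none
  integer :: ierr
  double precision :: t0, t1

  call mpi_init(ierr)
  t0 = mpi_wtime()
  call busy_work()
  t1 = mpi_wtime()
  print '(a,f12.6,a,es10.2,a)', 'elapsed = ', t1 - t0, ' s (tick = ', mpi_wtick(), ' s)'
  call mpi_finalize(ierr)
contains
  subroutine busy_work()
    integer :: i
    real :: x
    x = 0.0
    do i = 1, 10000000
      x = x + sin(real(i))
    end do
    if (x > 1e30) print *, x        ! keep the loop from being optimized away
  end subroutine busy_work
end program wtime_demo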
|
Yes, Wen was right. Jun created this array to store all data to be written to Grib2 output thus its dimensions can not be changed. |
Done with all my share of the 2D DECOMPOSITION subroutines, including those I took over from Bo.
Wen's regression test in venus:/u/Wen.Meng/noscrubd/ncep_post/post_regression_test_new still works, and I added numx=1 in the itag namelist. numx=1 passes the regression test for most of the models except for nmmb RH on just a few sigma levels, not all levels. I will look into this.
1125:462230579:RH:0.47-1 sigma layer:rpn_corr=0.992464:rpn_rms=2.55337
1126:462562119:RH:0.47-0.96 sigma layer:rpn_corr=0.993236:rpn_rms=2.52673
1127:462897214:RH:0.18-0.47 sigma layer:rpn_corr=0.996433:rpn_rms=1.58959
1128:463197281:RH:0.84-0.98 sigma layer:rpn_corr=0.980175:rpn_rms=3.61572
1129:463563671:MCONV:0.85-1 sigma layer:rpn_corr=0.999514:rpn_rms=6.62663e-09
1482:487727642:RH:0.47-1 sigma layer:rpn_corr=0.992464:rpn_rms=2.55337 |
Thanks to Jesse for his support and for taking over many subroutines' 2D decomposition. Now I am also done with all my tasks. I ran Wen's regression test with numx=1; the regression test worked for most of the models except for nmmb RH on some sigma levels, similar results to Jesse's.
Bo
|
Soooo. Note exch.f itself remains full of debug code that should be removed as soon as the 2D decomposition is debugged for all domains and all reasonable decompositions. |
I have run into a snag with the boundary halos. The subdomains are dimensioned with the lowest I set to the greater of ista-2 or 1. The 1 is causing issues for I=1: there is no place to put the cyclic IM value to the left. An analogous situation occurs for I=im. Pondering how to cleanly fix this. |
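Not the fix that was eventually adopted, just a tiny illustration of the wrap arithmetic involved: with cyclic east-west boundaries the left neighbor of i=1 is i=im and the right neighbor of i=im is i=1, which a halo clipped at max(ista-2,1) has nowhere to store.

! maps any global longitude index onto 1..im: iwrap(0,im) = im, iwrap(im+1,im) = 1
pure integer function iwrap(i, im)
  implicit none
  integer, intent(in) :: i, im
  iwrap = 1 + modulo(i - 1, im)
end function iwrap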
Summary from 11/23/2021 2D decomposition meeting today:
|
@JesseMeng-NOAA Please check the following two routines and lines and see if (i,j,jj) should be replaced by (i,ii,j,jj):
MISCLN.f:4177: !$omp parallel do private(i,j,jj)
SURFCE.f:463: !$omp parallel do private(i,j,jj) |
Yes all should be (i,ii,j,jj). Thanks for double checking. I will update those.
|
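Schematically, the kind of loop behind those directives might look like the sketch below (the body is hypothetical, not the actual MISCLN.f or SURFCE.f code); the point is that the added local longitude index ii must be private along with i, j, and jj, or threads will race on it.

subroutine fill_record(grid1, datapd, cfld, ista, iend, jsta, jend)
  implicit none
  integer, intent(in)    :: cfld, ista, iend, jsta, jend
  real,    intent(in)    :: grid1(ista:iend, jsta:jend)
  real,    intent(inout) :: datapd(:,:,:)
  integer :: i, ii, j, jj
!$omp parallel do private(i,ii,j,jj)
  do j = jsta, jend
    jj = j - jsta + 1        ! local latitude offset
    do i = ista, iend
      ii = i - ista + 1      ! local longitude offset: the new index in the 2D code
      datapd(ii, jj, cfld) = grid1(i, j)
    end do
  end do
!$omp end parallel do
end subroutine fill_record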
@JesseMeng-NOAA @BoCui-NOAA @GeorgeVandenberghe-NOAA @WenMeng-NOAA @fossell
Action items:
|
Tested George's fix for EXCH.f and identical results were reproduced. Code pushed to github. |
Summary from 1/4/2022 decomposition meeting: |
Summary from 1/18 2D decomposition tag-up
|
Summary from 2/15 2D decomposition meeting:
|
The interface between UPP/post_2d_decomp and ufs-weather-model/FV3 inline post has been developed for both the gfs and regional models. To test this functionality, run the UPP/post_2d_decomp regression test. Turn on the WRITE_DOPOST flag in the model configuration. More details can be found in this menu. |
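For orientation, an illustrative model_configure fragment for the write component is shown below; the exact option names, values, and placement may differ by ufs-weather-model version, so treat this as an assumption to be checked against the regression test configurations.

quilting:                .true.
write_groups:            1
write_tasks_per_group:   6
write_dopost:            .true.     # run UPP inline in the write/quilt tasks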
I started the documentation after breaking from continuing WCOSS crises. Below is a very high level schematic. Information on the actual variables changed will follow later.
! The 1D decomposition can read state from a model forecast file, either
! by reading on rank 0 and scattering, or by doing MPI_IO on the model
! history file using either nemsio, sigio, or netcdf serial or parallel
! I/O. Very old post tags also implement the more primitive full state
! broadcast or (a performance bug rectified 10/17) read the entire state
! on all tasks. This is mentioned in case a very old tag is encountered.
! The 2D decomposition only supports MPI_IO for the general 2D case, but
! all I/O methods remain supported for the 1D special case of the 2D
! code. This 1D special case works for all cases currently supported by
! older 1D tags and branches.
!
! To repeat, ONLY 2D NETCDF PARALLEL I/O WILL BE SUPPORTED FOR THE
! GENERAL CASE OF 2D DECOMPOSITION.
!
! **************************** 2D design enhancements ************************
!
! The 2D decomposition operates on subdomains with some latitudes and
! some longitudes. The subdomains are lon-lat rectangles rather than
! strips. This means state must be chopped into pieces in any scatter
! operation and the pieces reassembled in any gather operation that
! requires a contiguous in-memory state. I/O and halo exchanges both
! require significantly more bookkeeping.
!
! The structural changes needed for the 2D decomposition are implemented
! in MPI_FIRST.f and CTLBLK.f. CTLBLK.f contains numerous additional
! variables describing left and right domain boundaries. Many additional
! changes are also implemented in EXCH.f to support 2D halos. Many
! additional routines required addition of the longitude subdomain
! limits, but changes to the layouts are handled in CTLBLK.f; the "many
! additional routines" do not require additional changes when subdomain
! shapes are changed and have not been a trouble point.
!
! Both MPI_FIRST and EXCH.f contain significant additional test code to
! exchange arrays containing grid coordinates and ensure EXACT matches
! for all exchanges before the domain exchanges are performed. This is
! intended to trap errors in the larger variety of 2D decomposition
! layouts that are possible, and most of it can eventually be removed or
! made conditional at build and run time.
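To make the layout concrete, here is a schematic (not the MPI_FIRST.f code) of how a rank might be mapped to a lon-lat rectangle given numx subdomains in the longitude direction; the helper routine split and the remainder handling are illustrative assumptions.

subroutine layout_2d(me, ntasks, numx, im, jm, ista, iend, jsta, jend)
  implicit none
  integer, intent(in)  :: me, ntasks, numx, im, jm
  integer, intent(out) :: ista, iend, jsta, jend
  integer :: numy, ix, iy

  numy = ntasks / numx          ! assumes numx divides ntasks evenly
  ix   = mod(me, numx)          ! position in the longitude direction
  iy   = me / numx              ! position in the latitude direction

  call split(im, numx, ix, ista, iend)
  call split(jm, numy, iy, jsta, jend)
contains
  subroutine split(n, nparts, ipart, lo, hi)
    integer, intent(in)  :: n, nparts, ipart
    integer, intent(out) :: lo, hi
    integer :: base, rem
    base = n / nparts
    rem  = mod(n, nparts)                 ! remainder spread over the leading subdomains
    lo   = ipart*base + min(ipart, rem) + 1
    hi   = lo + base - 1
    if (ipart < rem) hi = hi + 1
  end subroutine split
end subroutine layout_2d

With numx=1 this collapses to the familiar latitude-strip layout, which is the 1D special case mentioned above.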
|
Thank you, George. This high level documentation looks good. Looking forward to your documentation on variables you added/updated and their descriptions |
Additional post documentation. This is NOT in any routines but is a separate little document (so far).
! The following is found in CTLBLK.f and shared with the rest of UPP through use of CTLBLK.mod
! im       integer  full longitude domain
! jm       integer  full latitude domain
!
! jsta     integer  start latitude on a task subdomain
! jend     integer  end latitude on a task subdomain
! ista     integer  start longitude on a task subdomain
! iend     integer  end longitude on a task subdomain
! ista_2l  integer  start longitude -2 of the subdomain
! iend_2u  integer  end longitude +2 of the subdomain
! jsta_2l  integer  start latitude -2 of the subdomain
! jend_2u  integer  end latitude +2 of the subdomain
! The shape of the subdomain is ista_2l:iend_2u,jsta_2l:jend_2u, so it includes the halos,
! although the halos are not populated until the exchange is done in EXCH.f
!
! Because of halos we need more bounds defined:
!
! jsta_m   single latitude below begin latitude of subdomain
! jend_m   single latitude above end latitude of subdomain
! jsta_m2  second latitude below begin latitude of subdomain. Apparently not used
!          currently in computations, but the subdomain shape uses this
! jend_m2  second latitude above end latitude of subdomain. Apparently not used
!          currently, but the subdomain shape uses this
!
! ista_m   single longitude before begin longitude
! iend_m   single longitude after end longitude
! ista_m2  second longitude before begin longitude
! iend_m2  second longitude after end longitude
! ileft    MPI rank containing the last longitude before ista_m
! iright   MPI rank containing the first longitude after iend_m
! iup      MPI rank containing the first latitude after jend
! idn      MPI rank containing the last latitude before jsta
! ileftb   MPI rank containing the last longitude before ista_m, but for cyclic boundary
!          conditions where "last" at the beginning is the other end of the domain
!          (apparently unused and replaced with a local calculation)
! irightb  MPI rank containing the first longitude after iend_m, but for cyclic boundary
!          conditions where "first" at the beginning is the other end of the domain
!          (apparently unused and replaced with a local calculation)
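An illustrative use of the bounds documented above (not taken from any UPP routine): a five-point smoother whose interior loops run over the _m bounds so that the i-1/i+1 and j-1/j+1 references stay inside the halo-extended allocation populated by EXCH.f.

subroutine smooth(fld, out, ista_2l, iend_2u, jsta_2l, jend_2u, &
                  ista_m, iend_m, jsta_m, jend_m)
  implicit none
  integer, intent(in) :: ista_2l, iend_2u, jsta_2l, jend_2u
  integer, intent(in) :: ista_m, iend_m, jsta_m, jend_m
  real, intent(in)  :: fld(ista_2l:iend_2u, jsta_2l:jend_2u)   ! halos already exchanged
  real, intent(out) :: out(ista_2l:iend_2u, jsta_2l:jend_2u)
  integer :: i, j

  out = fld                                   ! copy through, then smooth the interior
  do j = jsta_m, jend_m
    do i = ista_m, iend_m
      out(i,j) = 0.2*(fld(i,j) + fld(i-1,j) + fld(i+1,j) + fld(i,j-1) + fld(i,j+1))
    end do
  end do
end subroutine smooth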
|
@GeorgeVandenberghe-NOAA - Is this second piece of documentation you posted going to go into the same overview document as the previous schematic text you provided? I'm mocking up some pages to be included in the formal documentation, so just wanted to check. |
Yes. It's in the overview document, not an individual routine docblock.
|
Summary from 3/15/2022 2D decomposition meeting:
|
@HuiyaChuang-NOAA Since Jesse's UFS PR is available on UFS repository, you might invite model developers from GFS, RRFS, and HAFS to test this upp 2d decomposition capability. @junwang-noaa may chime in. |
The UPP standalone regression tests failed for RRFS and HAFS due to syncing PR #441. The regional FV3 read interface INITPOST_NETCDF.f wasn't updated with 2D decomposition. The workaround is turning off RRFS and HAFS in your UPP RT tests. I will fix the issue after PR #453 (Unify regional and global FV3 read interfaces) is committed. |
Summary from 3/29/2022 meeting:
|
@JesseMeng-NOAA @BoCui-NOAA @WenMeng-NOAA @GeorgeVandenberghe-NOAA @fossell
Action items
|
EMC_post is currently decomposed on latitude (J) only. This is adequate for several more years, but since post is generally being refactored, now is a good time to make the jump to 2D. A second goal is to make the 2D decomposition either flexible, or just have it mimic the ufs-weather-model decomposition so developers working on both codes can exploit commonality. This will be a modestly difficult project, with most of the effort being figuring out the plumbing of the code (in progress). This issue is being created for management and project-leader tracking; per EMC management directives and also best practices, results should be tracked through this GitHub issue or Slack, NOT email.
There are many OTHER scaling issues in the post that are not affected by the decomposition. Most of the issues are orthogonal to the decomposition though and can be worked independently. The most salient is input I/O of model state fields in the standalone post.
By 03/01/2021:
The offline post testing procedure provided by Jesse can be found here
The inline post testing procedure provided by Bo can be found here
Jesse's FV3 branch can be found here