Add 200hPa VPOT and STRM to SFS #951
Conversation
@WenMeng-NOAA Where can I find GFS V16 model outputs for the comparison with CFS?
@KarinaAsmar-NOAA /lfs/h1/ops/prod/com/gfs/v16.3/gfs.//atmos
sorc/ncep_post.fd/MDL2P.f
Outdated
@@ -1769,6 +1775,18 @@ SUBROUTINE MDL2P(iostatusD3D)
endif
ENDIF
ENDIF

!*** K. ASMAR - SAVE ALL P LEVELS OF U/V WINDS AT GLOBAL GRID FOR COMPUTING VELOCITY POTENTIAL AND STREAMFUNCTION
@KarinaAsmar-NOAA Are there any reasons for allocating the full domain to these new arrays? The UPP is coded using MPI. Allocating arrays over the full domain is not recommended due to the risk of consuming all memory at runtime.
@WenMeng-NOAA The sptrunv.f routine needs the input winds on the global grid. In earlier iterations I tried using only the winds on the sliced ista/iend, jsta/jend subdomains, but the resolution was not sufficient to get good results. CFS also uses the winds on the full IM,JM domain.
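For reference, a minimal sketch of the two array layouts under discussion (the declarations are illustrative only; ista/iend and jsta/jend are UPP's per-rank bounds):

      real, dimension(ista:iend,jsta:jend,lsm) :: u_sub   ! per-rank subdomain array (UPP convention); memory is split across MPI ranks
      real, dimension(im,jm,lsm)               :: u_full  ! full-domain array; every rank holds IM*JM*LSM words, which is the memory concern raised above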
@KarinaAsmar-NOAA This approach is not acceptable in UPP. Please work with @JesseMeng-NOAA and @HuiyaChuang-NOAA on improving it.
@JesseMeng-NOAA @HuiyaChuang-NOAA What would be the best way to assemble the full domain of the isobaric winds for input into sptrunv.f? Should this be done in a separate subroutine in UPP? Or should the CFS program be imported into UPP to read the USL/VSL winds in the full domain from the GRIB2 file (it would need to be converted to GRIB)? Or something else?
sorc/ncep_post.fd/MDL2P.f
Outdated
ELSE
JCAP = JM-1
ENDIF
CALL SPTRUNV(0,JCAP,IDRT,IM,JM, &
@KarinaAsmar-NOAA I would suggest you start a new subroutine for the code that calculates velocity potential and stream function. You might refer to the CAL*.f subroutines in UPP, e.g. CALHEL.f.
@WenMeng-NOAA Should the new subroutine also include the collection of the USL/VSL winds in the full domain (as done here)? Or should that be done in MDL2P.f or in a separate subroutine?
@KarinaAsmar-NOAA
@WenMeng-NOAA
My understanding (Wen, please verify): after we obtain USL/VSL in MDL2P, call a new subroutine outside MDL2P and move the code that calculates velocity potential and stream function into it. That subroutine would:
- call COLLECT_ALL
- call SPTRUNV under if (me .eq. 0) only; you probably need to call mpi_barrier after the SPTRUNV call
- return the full-domain CHI and PSI back to MDL2P
Back in MDL2P, assign the subset of CHI and PSI to GRID1 (see the sketch below).
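A rough Fortran sketch of that flow, for illustration only: the subroutine name is a placeholder, the ctlblk_mod variable names are assumed, and the COLLECT_ALL and SPTRUNV argument lists are omitted (see UPP and the sp library for the real interfaces).

      SUBROUTINE CALCHIPSI_SKETCH(USL,VSL,CHI,PSI)
        use mpi
        use ctlblk_mod, only: me, im, jm, lsm, ista, iend, jsta, jend   ! assumed module/variable names
        implicit none
        real, intent(in)  :: USL(ista:iend,jsta:jend,lsm), VSL(ista:iend,jsta:jend,lsm)
        real, intent(out) :: CHI(im,jm,lsm), PSI(im,jm,lsm)
        integer :: ierr
        ! 1) gather the per-rank wind subdomains onto the full IM x JM grid,
        !    e.g. with COLLECT_ALL as suggested above (arguments omitted here)
        ! 2) run the spectral transform on one rank only
        if (me == 0) then
          ! CALL SPTRUNV(...)   ! fills the full-domain CHI and PSI
        endif
        ! 3) synchronize before handing the full-domain CHI/PSI back to MDL2P
        !    (the real code would likely use UPP's own communicator rather than MPI_COMM_WORLD)
        call mpi_barrier(MPI_COMM_WORLD, ierr)
      END SUBROUTINE CALCHIPSI_SKETCH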
@JesseMeng-NOAA Could the USL/VSL collection be moved into the new subroutine? If not, it can remain in MDL2P.f. Your proposed new subroutine looks good to me.
@JesseMeng-NOAA Where should the USL/VSL winds be saved for all LSM levels (e.g., in MDL2P.f or in the new subroutine)? At the moment, they are computed inside the LP=1,LSM loop in MDL2P.f.
@KarinaAsmar-NOAA @WenMeng-NOAA
Calculate and save USL/VSL in MDL2P. Pass (USL, VSL, CHI, PSI) to the new subroutine along with anything else SPTRUNV needs. COLLECT_ALL can be done in the new subroutine.
@WenMeng-NOAA @JesseMeng-NOAA The latest changes have been pushed to the branch. The new subroutine CALCHIPSI.f collects the winds and calls sptrunv. VPOT and STRM are then sliced and saved in MDL2P.f. The plots comparing the new changes with CFS are here:
VPOT and STRM Plots v2.pptx
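For illustration, the slicing described above back in MDL2P.f could look roughly like the following (a sketch only; GRID1 and the ista/iend, jsta/jend bounds follow UPP's usual conventions, and LP is assumed to be the pressure-level index of the surrounding loop):

      ! copy this rank's subset of the full-domain velocity potential into GRID1
      do j = jsta, jend
        do i = ista, iend
          grid1(i,j) = chi(i,j,lp)
        enddo
      enddo
      ! ... and similarly for the stream function PSI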
@KarinaAsmar-NOAA Your branch did not pass the Intel Linux Build / setup, possibly because of conflicts during the merge. Please check out the latest UPP/develop into a new directory, bring in your changes, and commit them all at once. Hope this works.
@JesseMeng-NOAA I checked out the latest UPP/develop and added the 8 new files in one commit, but the Linux build is still not working. Could something be broken in any of the files modified for this PR?
Does it compile successfully on wcoss2?
I will try committing the files one by one to see which ones break the Linux build.
The Linux CI build has had issues recently; if you can verify a successful build on your side, then the CI failure for Intel Linux can be ignored for now. Sorry about any confusion there!
Yes, I just compiled it on wcoss2 without issues.
Thanks! @FernandoAndrade-NOAA @JesseMeng-NOAA The branch compiles and generates the grib2 file on wcoss2.
@KarinaAsmar-NOAA @JesseMeng-NOAA From my UPP standalone testing, the velocity potential and stream function calculation increases runtime by at least 30%. Please work on optimizing the code.
sorc/ncep_post.fd/CALCHIPSI.f
Outdated
integer :: JCAP, I, J, L, IERR
REAL, dimension(ISTA:IEND,JSTA:JEND,LSM), intent(in) :: UISO, VISO

real, dimension(IM,JM,LSM) :: IN_UWIND, IN_VWIND, OUT_UWIND, OUT_VWIND, DIV, ZO, CHI_OUT, PSI_OUT, CHI, PSI
@KarinaAsmar-NOAA @JesseMeng-NOAA You might consider using allocatable variables and reducing the number of full-domain variables to conserve memory.
@WenMeng-NOAA I'll try with allocatable variables and let you know.
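A minimal sketch of the allocatable approach, assuming the variable names from the snippet above (whether the full-domain work arrays are needed on every rank or only on rank 0 is a separate question):

      real, allocatable :: in_uwind(:,:,:), in_vwind(:,:,:), chi(:,:,:), psi(:,:,:)
      ! allocate the full-domain work arrays only when this code path actually runs
      allocate(in_uwind(im,jm,lsm), in_vwind(im,jm,lsm), chi(im,jm,lsm), psi(im,jm,lsm))
      ! ... gather winds, call the transform, copy the results to the output arguments ...
      ! release the memory as soon as the transform is finished
      deallocate(in_uwind, in_vwind, chi, psi)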
@WenMeng-NOAA Can you try the standalone test with the new push and see if the runtime improved?
@KarinaAsmar-NOAA The new commit focuses on memory conservation. However, the runtime has not improved yet.
@KarinaAsmar-NOAA Please point me to your job script on wcoss2.
@WenMeng-NOAA @KarinaAsmar-NOAA From wcoss2, please copy this file to your directory. Also, from your job script, find the following line. Try it and let me know if the runtime can be reduced.
The scripts are in /lfs/h2/emc/vpppg/noscrub/karina.asmar/test/UPP. For SFS it is submit_run_sfs_wcoss2.sh, and for GFSv16 it is submit_run_gfsv16_wcoss2.sh.
@JesseMeng-NOAA @WenMeng-NOAA With these changes to the source code, my jobs with SFS data run in under 1 minute and my jobs with GFS data run in about 8 minutes (they used to take more than 12 minutes). This was using 1 thread in the job script. Using 4 threads, the GFS job crashes with an MPI barrier error, and the SFS job runs but the plots do not look right.
Thanks for clarifying. I'll check on this.
Where is subroutine SPTRUNV? It's not in ncep_post.fd
It is here. It is linked to UPP.
You may also want to see the CFS genpsiandchi.fd program.
@WenMeng-NOAA @KarinaAsmar-NOAA
Thanks @JesseMeng-NOAA! @WenMeng-NOAA Please let us know if sptrunv can be used in UPP given the thread limitations, or if another approach is needed for VPOT/STRM. I am not sure whether sptrunv was originally designed for a different type of processing environment.
@KarinaAsmar-NOAA @JesseMeng-NOAA Implementing OpenMP in the new subroutine calchipsi.f is optional. The routine sptrunv is part of the sp library and does not need to be implemented in UPP. The main concern with this PR is runtime. From the UPP code management perspective, we can't accept a new calculation that significantly increases runtime.
Karina will work on an alternative approach that generates stream function and velocity potential from a standalone utility in global-workflow instead of calculating them in UPP. This PR is on hold.
@KarinaAsmar-NOAA You have been working on a new approach for calculating stream function and velocity potential on discrete grids, so I will close this PR.
@WenMeng-NOAA Sounds good. I will submit a new one when the new method is ready.
CPC has requested that 200hPa Velocity Potential (VPOT) and Streamfunction (STRM) be included in SFS. The issue describing the request is #902. This PR adds VPOT and STRM to the UPP and SFS product lists and incorporates the calculations from the CFS operational package (a version of the source code is here).
The grib2 file with these changes is on Hera: /scratch1/NCEPDEV/stmp2/Karina.Asmar/sfs_test/post_sfs_1991052100/GFSPRS.GrbF24.