Changes for openmpi #7
Conversation
@dustinswales I think that this is OK, although I'm kind of disappointed in being forced to use MPI now. It complicates using the model for something that isn't needed, and when running a single simple case, it actually slows down execution due to the MPI overhead. In addition, whereas we could previously run on login nodes because serial jobs use so few resources, we now have to use either the batch system or an interactive compute node, at least on Hera. Other platforms may allow running on login nodes. Regardless, this isn't your fault, and I'm going to merge this in because it does fix CI, except for the DEPHY test, which is probably failing for a different reason.
@grantfirl Yeah, the MPI requirement requires a bit of rework on the host side. The thinking at the time was that requiring the CCPP to use MPI was fine, since it was included in spack-stack and our NCAR/NRL colleagues were fine with this requirement. Plus, the CCPP is usually embedded within an NWP/GCM, which has access to MPI. The thing that worries me is performance. You mentioned a hit due to the MPI overhead; how big are we talking here? I need to figure out why the DEPHY test is failing.
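
For context on what the host-side rework implies, here is a minimal sketch (illustrative C, not the actual SCM host code; `run_physics` is a hypothetical stand-in for the CCPP driver): even a serial, single-column run now has to initialize MPI and hand a communicator down to the physics, which is where the startup overhead comes from.

```c
/* Minimal sketch, illustrative only: a host that must initialize MPI
 * even for a single-rank, single-column run. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical stand-in for the CCPP physics driver. */
static void run_physics(MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    printf("running column on rank %d of %d\n", rank, size);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);        /* required even when only one rank runs */
    run_physics(MPI_COMM_WORLD);   /* host hands the communicator down */
    MPI_Finalize();
    return 0;
}
```

Built with `mpicc` and launched via `mpirun -np 1`, this is the shape of the new minimum requirement: the batch system or launcher has to be in the loop even for a serial case.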
20% penalty on the case that I ran (ARM_SGP_summer_1997_A using HR3). Also, interestingly, for both non-MPI and MPI runs, the first run takes twice as long as repeating the same run again. Weird. Must be some kind of caching going on.
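
One way to separate fixed MPI startup cost from per-step overhead would be to time `MPI_Init` directly; a minimal timing sketch (illustrative C, assuming POSIX `clock_gettime`; this is not how the 20% figure above was measured):

```c
/* Sketch: measure how long MPI initialization alone takes. */
#include <mpi.h>
#include <stdio.h>
#include <time.h>

int main(int argc, char **argv)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    MPI_Init(&argc, &argv);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double init_s = (t1.tv_sec - t0.tv_sec)
                  + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("MPI_Init took %.3f s\n", init_s);
    MPI_Finalize();
    return 0;
}
```

If `MPI_Init` accounts for most of the penalty, the hit is a fixed per-run cost rather than a per-timestep slowdown; the first-run-versus-repeat difference noted above would instead be consistent with the caching hypothesized there (e.g., input files landing in the filesystem cache), though that is just a guess.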
@grantfirl I think this should do the trick.