Add settings for compy, for standalone homme and coupled #3117
Conversation
… could not read homme nl file from stream, intel18 build is ok.
…b on 24 skybridge ranks (75 sec) for intel18. I assume no more binding options are needed.
Why does intel18 need its own machine file? Why can't the intel19 machine file work with intel18 if the user loads intel18 and the appropriate netcdf? I think we should get the netcdf path from the environment (like the intel19 machine file) instead of hardcoding it (like in the intel18 machine file). My only hard request would be to remove the ADD_LINKER_FLAGS. It's the job of cmake's find_netcdf to set that variable.
For some reason NETCDF_ROOT is not set in the environment for intel18. About linker flags -- I do not know exactly how it works in homme or cmake, but the libs were not listed in link.txt, so I added a hack. What is the proper way?
For linker flags, there are other files that have it:
Some notes on what I think is the best cmake-style approach:
- anvil: set NetCDF_Fortran_PATH and NetCDF_C_PATH based on the output of nf-config and nc-config (needed when the fortran and c libraries are in different locations)
- cori-knl: set NETCDF_DIR based on environment variables

If a module sets an environment variable, that results in simpler cmake code. But if the modules do not set an appropriate variable, then we assume they set the path and we use "nc-config --prefix" (or, if the fortran and c libraries are in different locations, nc-config and nf-config must both be used). If a module sets neither an environment variable nor a path, then more work will be needed to figure out how the module expects the build system to find the library.
- theta, bebop, blues: shouldn't need to use ADD_LINKER_FLAGS -- something is wrong
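The discovery order described above could be sketched as a machine cache fragment along these lines (a hedged illustration, not any actual machine file in the repo; the variable names follow the ones mentioned in this thread, and the fallback logic is an assumption):

```cmake
# Sketch of the preferred netcdf discovery order (hypothetical cache fragment).
# 1) Trust an environment variable if the module sets one (cori-knl style).
if(DEFINED ENV{NETCDF_DIR})
  set(NETCDF_DIR $ENV{NETCDF_DIR} CACHE FILEPATH "netcdf install prefix")
else()
  # 2) Otherwise ask the config tools; use both nc-config and nf-config
  #    when the C and Fortran libraries live in different prefixes (anvil style).
  execute_process(COMMAND nc-config --prefix
                  OUTPUT_VARIABLE NetCDF_C_PATH
                  OUTPUT_STRIP_TRAILING_WHITESPACE)
  execute_process(COMMAND nf-config --prefix
                  OUTPUT_VARIABLE NetCDF_Fortran_PATH
                  OUTPUT_STRIP_TRAILING_WHITESPACE)
  set(NetCDF_C_PATH ${NetCDF_C_PATH} CACHE FILEPATH "netcdf C prefix")
  set(NetCDF_Fortran_PATH ${NetCDF_Fortran_PATH} CACHE FILEPATH "netcdf Fortran prefix")
endif()
```

With this pattern, ADD_LINKER_FLAGS should be unnecessary, since find_netcdf can resolve the library paths itself.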
With heavy input from @ambrad: it seems that the compy netcdff library is built without its C netcdf dependency.
which says to append the C netcdf libs if the netcdf F library is not shared. On compy, the library is shared, but on anvil the library is built with the netcdf C dependency:
but on compy it is not:
Pinging @AaronDonahue, who was lamenting C-less netcdf libraries the other day as well.
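One way to check for this situation (a sketch; the library path and messages are hypothetical placeholders, not anything compy-specific) is to inspect the Fortran library's recorded runtime dependencies with ldd:

```shell
#!/bin/sh
# Check whether libnetcdff records libnetcdf as a runtime dependency.
# The default path is a placeholder; pass the real library location as $1.
LIB="${1:-/opt/netcdf/lib/libnetcdff.so}"
if ldd "$LIB" 2>/dev/null | grep -q 'libnetcdf\.so'; then
    echo "C dependency present: no extra link flags needed"
else
    echo "no libnetcdf dependency: append the C netcdf libs at link time"
fi
```

On anvil the first branch would fire; on compy the second, which is why the C libs had to be appended explicitly.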
Adding a comment on why the pio2 cmake file above is used:
Also, in the homme cmake list:
The same issue occurs with the intel19 version:
…me: added a compy-pgi cache file, removed add_folders from the homme cmake build, added the stdc++ lib for cime, and switched the linker there to the F linker. The eam_theta suite ran after that; not sure what to do about the standalone suite, which looks for compy.cmake (while there are compy-intel and compy-pgi).
I had to merge master into this branch to get the most recent cime for compy. The last commit was done with heavy input from @ambrad. With PGI, standalone homme produces a number of cmake warnings.
Homme has two switches, HOMME_USE_MKL and HOMME_FIND_BLASLAPACK. If neither is set to ON, then cmake tries to use the system blas/lapack. At the moment, the system blas/lapack is not supported on a compute node. When homme is built on a login node, the libs are in /usr/lib64/, but on an allocated node (ssh to it; salloc is not enough) that folder does not contain the blas library. I am leaving the warnings alone, since the compy configuration is likely to change, rather than introducing more changes into the homme cmake. Homme standalone tests are running now.
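The two switches described above would appear in a machine cache file along these lines (a sketch for illustration; choosing MKL here is an assumption, not what any actual compy cache file does):

```cmake
# Hypothetical cache fragment: pick exactly one BLAS/LAPACK strategy.
# With both switches OFF, homme's cmake falls back to the system
# blas/lapack, which is not available on compy compute nodes.
set(HOMME_USE_MKL TRUE CACHE BOOL "Use Intel MKL for BLAS/LAPACK")
set(HOMME_FIND_BLASLAPACK FALSE CACHE BOOL "Let cmake locate an external BLAS/LAPACK")
```

Setting one of the two explicitly per machine avoids depending on whatever happens to be in /usr/lib64/ on the node type doing the build.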
It is not true that the eam_theta suite passed: 3 tests failed (thetahy_sl and both ERS tests). For SL and ERS with thetahy, both failures occur in init, but there is no meaningful message; the log file is cut off somewhere in the hybrid coordinates output. For ERS with thetanh, the restart and no-restart files fail diffs in 189 fields.
For the new intel file, 'make baseline' was successful. 1) For intel, 'make baseline' runs many times faster than for phi.
Traceback is on, but there is no traceback info in the output. I suggest we track the failing tests on compy/pgi in a separate PR.
This is almost ready -- I need to figure out compy-intel vs. compy-pgi for jenkins runs. Otherwise, @mt5555, would you please review? Not sure how urgent it is.
@mt5555 please re-review. We need this merged to next so we can get the compy pgi tests working.
…ib/CIME/SystemTests/homme.py
I renamed compy-pgi.cmake to compy.cmake, assuming nightlies will only be run with pgi.
merged to next
One thing I just noticed: this PR makes the default (compy.cmake = compy-pgi.cmake) for HOMME PGI. But I thought for compy we recently switched back to intel as the default? |
Nightlies run with pgi (today's cdash), and cime picks ${machine}.cmake (or something like that) as the cache file. So I renamed the file to make it work with nightlies.
Maint 5.6
merge maint-5.6 branch with merge conflicts resolved
Test suite: scripts_regression_tests.py
Test baseline:
Test namelist changes:
Test status: [bit for bit, roundoff, climate changing]
Fixes
User interface changes?:
Update gh-pages html (Y/N)?:
Code review:
…process-pr Fetch changed files from PR in chunks
Adding cmake cache files for compy intel and pgi. Fixes for the cxx build and for the linker when building with cime.
Fixes #3225 .
[bfb] for the machines we run on.