Build ACCESS-OM2 with Spack openmpi and run with system openmpi #13
I have built ACCESS-OM2 with Spack, using a Spack-built openmpi. Running it with that Spack-built openmpi results in a 5x slowdown compared to the COSIMA ACCESS-OM2 (built without Spack and run with the system openmpi). That's documented in #6.
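For reference, one way to make Spack build against the system openmpi instead of its own is to register it as an external package. A minimal sketch is below; the version, prefix, and module name are assumptions for illustration, not values taken from this issue:

```yaml
# packages.yaml (sketch) -- tell Spack to use the system openmpi
packages:
  openmpi:
    buildable: false              # never build openmpi from source
    externals:
      - spec: openmpi@4.1.5       # assumed to match the system module
        prefix: /apps/openmpi/4.1.5   # hypothetical install prefix
        modules: [openmpi/4.1.5]      # hypothetical module name
```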
@harshula I ran into this error too for a container build (outside of spack): spack/spack#30906. However, I'm using the head branch of openmpi, which is newer than 4.1.5. Do you have a config / log somewhere that records the set of working / compatible versions of things?
Hi @vsoch, I've updated the issue description. I can build ACCESS-OM2 with Spack openmpi 4.1.5, but I can't run it with Gadi's system openmpi. Is that what you are experiencing?
I finally got it working after much pain - I've pinned all versions except for prrte, so I should do that too. It's very janky, but at least it seems to work? https://github.com/researchapps/pmix-compose/blob/611b0e13e381bba1e61f4d2c73ea67d2f9ba5046/Dockerfile
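A sketch of what pinning everything in a Spack environment might look like (the package versions here are illustrative, not taken from that Dockerfile):

```yaml
# spack.yaml (sketch) -- pin every spec so the build is reproducible
spack:
  specs:
    - openmpi@4.1.5
    - pmix@4.2.3      # illustrative pinned version
    - prrte@3.0.0     # illustrative pinned version
  concretizer:
    unify: true       # concretize to one consistent version of each dependency
```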
Hi @vsoch, Have you tried using Spack and Spack's container support?
lol No I've never heard of spack, what's that? Just kidding :) Yes and yes, and I'm not interested, thanks!
Hi @vsoch, We would benefit from knowing more about the pros and cons of Spack's container support. Can you please elaborate on your experience with it?
I regard the spack team very highly, so I don't want to bias you on it, but I'm happy to share my experiences. My general sentiment is that if there is a piece of software that builds well with spack, and you can build it into a container with a spack.yaml, that's a reasonable approach (and indeed we have many lammps bases that do this; here is an example). That particular container started as a spack.yaml and was ported to the container, and we've been able to update it once (with a different spack version) with some difficulty. But now that it's built and provides what we need, we're good. So if/when you get something working, save absolutely everything, lock file and version wise, so you can build it again. Also, the first time you try, do it dynamically, so if/when it fails you don't need to start from scratch. You will very likely run into issues if you try to update a version of a dependency of spack itself (this has been my experience).

As for using spack containerize, the extent to which something builds depends on the extent to which it would reliably build with spack outside of the container, and that's a mixed bag. I helped maintain a build service called autamus for a bit that exclusively used spack containerize, and it was overwhelming to debug and keep up with. This is no fault of spack, I think - building is really hard. Dependencies changing break the things that depend on them. So in my experience, what is reasonable is maintaining a small family of builds that I care about. For example, on the Flux team I maintain our flux spack packages, and we have a repository https://github.com/flux-framework/spack that does the package build every night and syncs changes (and helps us open PRs with releases) for Flux. It's made the process of being a maintainer immensely easier, because I mostly just watch for an occasional failure, and then click a link in an issue to open a PR for a new release.

For software that I want to build into containers, my general preference is to choose the design of the container build that is optimal for the software. For production containers (e.g., Kubernetes operators or small developer tools) that use go or rust single binaries, this usually means multi-stage builds where I can get rid of everything aside from the basic runtime dependencies / binary. For most of my containers that are more like development environments, I like choosing a sane OS base (rocky or debian or ubuntu these days) and then adding on the minimal system-level packages that I need. Of course this isn't optimized for HPC niche architectures, but that doesn't tend to be my use case.

I think what is most important for me is reproducibility of the build and container over time, and I find system package managers and "the most basic" installs most reliable. It's really satisfying, for example, to update the OS of a container and have most of the packages still build and install. The issue with spack in a container is that you can't easily abstract away the spack opt install directory, and if you create a view (as autamus did) it can still be challenging if, for example, you have two versions of a dependency that provide the same file and the view isn't allowed to create it. I might not have a good perspective, because I have a lot of experience making containers and can whip them up fairly quickly, even from scratch. E.g., I started this repository of automated builds recently. That's my high-level 0.02 - it really depends.

Do you have specific questions or use cases that can help guide my answer or advice?
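For reference, Spack's container support mentioned above is driven by a `container:` section in the spack.yaml; `spack containerize` then emits a multi-stage Dockerfile. A minimal sketch, with illustrative image names and versions (the lammps spec echoes the example above and is not from this issue):

```yaml
# spack.yaml (sketch) -- input for `spack containerize`
spack:
  specs:
    - lammps          # example package, as mentioned above
  container:
    format: docker    # emit a Dockerfile (singularity is also supported)
    images:
      os: ubuntu:22.04
      spack: '0.20'   # pinning Spack itself helps reproducibility
```

Running `spack containerize > Dockerfile` in the environment directory generates a two-stage Dockerfile: the first stage installs the specs with Spack, and the second copies only the installed software into a slimmer runtime image.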
By default, these Spack binaries are built using Docker [...]
Gadi [...]
Even if we convert the binaries to use [...]
[Updated: 10/08/2023]
This Issue is focused on building ACCESS-OM2 with Spack openmpi (works) and running it with system openmpi (fails).