
Calling OpenMPI_jll where it shouldn't be #118

Closed
hammy4815 opened this issue Jun 19, 2023 · 3 comments
@hammy4815

Hi All,

I am running MUMPS.jl on a PowerPC cluster and have set my MPIPreferences to use the MPI library provided on the cluster (MPIPreferences.use_system_binary()). With this, I initialize my library with:

import MPI
MPI.Init()
import MUMPS
...
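
For reference, the system-binary configuration mentioned above amounts to a one-off step along these lines (a sketch; MPIPreferences.use_system_binary() autodetects libmpi on the default library paths, and extra keyword arguments may be needed on an unusual cluster):

```julia
# One-off configuration step, run once per project before using MPI.jl.
# It writes LocalPreferences.toml so that MPI.jl binds to the cluster's
# MPI library instead of the OpenMPI_jll / MPICH_jll artifacts.
using MPIPreferences
MPIPreferences.use_system_binary()
```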

It seems to 'work'; however, it is incredibly slow and tends to break at seemingly random points. Some examples of breaking include:

  1. the warning Warning: The call to compilecache failed to create a usable precompiled cache file for 'X MODULE' for all of my dependencies
  2. internal errors where everything just crashes
  3. everything freezing until my job runs out of time

At the start of my script, when I import MUMPS, I get the warning:

┌ Warning: Error requiring `OpenMPI_jll` from `MPI`
│   exception =
│    OpenMPI_jll cannot be loaded: MPI.jl is configured to use the system MPI library
│    Stacktrace:
│      [1] error(s::String)
│        @ Base ./error.jl:35
...
│     [27] _start()
│        @ Base ./client.jl:522
└ @ Requires /nobackup/users/ihammond/.julia/packages/Requires/Z8rfN/src/require.jl:51

I'm unsure why there is any call to OpenMPI_jll at all, considering I have configured MPIPreferences to use the system MPI binaries and have verified they are being used (the warning itself also confirms this). Do I need to configure MUMPS as well so it knows which MPI binaries I am using? Or do I have to compile MUMPS myself against my MPI build? I was under the impression that MUMPS.jl uses MPI.jl to access the MPI API, so I wouldn't need to compile MUMPS for my MPI build, but this may be my mistake.
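
One way to double-check which backend MPI.jl has resolved to is to inspect MPIPreferences' documented constants (a sketch; the values shown in the comments are examples, not guaranteed output):

```julia
# Inspect the active MPI preference without initializing MPI itself.
using MPIPreferences
@show MPIPreferences.binary   # "system" if use_system_binary() took effect
@show MPIPreferences.abi      # the detected ABI, e.g. "OpenMPI" or "MPICH"
```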

Thanks,
Ian

@amontoison
Member

I compiled MUMPS with different versions of MPI (MPICH, OpenMPI, MicrosoftMPI and MPITrampoline), so it should download the relevant tarball for your architecture and MPI backend.
I don't know how we can determine which version of the tarball was downloaded:
https://github.com/JuliaBinaryWrappers/MUMPS_jll.jl/releases/tag/MUMPS-v5.6.0%2B0

@hammy4815
Author

I don't know how we can determine which version of the tarball was downloaded:
https://github.com/JuliaBinaryWrappers/MUMPS_jll.jl/releases/tag/MUMPS-v5.6.0%2B0

I downloaded the correct version from this link and linked it myself, and that seems to have fixed the issue. Thanks!
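
For anyone hitting the same problem: one documented way to "link it yourself" is Pkg's artifact-override mechanism, pointing the MUMPS_jll artifact at the manually unpacked tarball via ~/.julia/artifacts/Overrides.toml (a sketch; the hash and path below are placeholders — the real key is the artifact's git-tree-sha1 from the JLL's Artifacts.toml):

```toml
# ~/.julia/artifacts/Overrides.toml
# Map the artifact's content hash to a local directory (placeholders shown).
0123456789abcdef0123456789abcdef01234567 = "/path/to/unpacked/MUMPS"
```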

@amontoison
Member

Great @hammy4815!
I should investigate a bit to understand how Julia decides which tarball to download when we switch the MPI backend.
