
{{mpi}}-mpicc not for use in recipes ? #91

Closed · 1 task done
carterbox opened this issue Feb 25, 2022 · 22 comments · Fixed by conda-forge/conda-forge.github.io#1752

Comments

carterbox (Member) commented Feb 25, 2022

Issue:
In conda-forge/staged-recipes#14726, the linux build cannot resolve dependencies in the build environment:

conda create -n test gcc_linux-64=10 gxx_linux-64=10 gfortran_linux-64=10 openmpi-mpifort

The resolution error is:

Package gfortran_linux-64 conflicts for:
openmpi-mpifort -> gfortran_linux-64[version='7.*|9.*|7.3.0.*']
gfortran_linux-64=10

The current conda-forge pinnings show that the Fortran compiler version should be 10.

So why does this package still depend on versions 7 and 9? Is there a reason, or does the package just need to be rerendered?


Environment (conda list):
$ conda list


Details about conda and system ( conda info ):
$ conda info
     active environment : base
    active env location : /home/dching/miniconda3
            shell level : 1
       user config file : /home/dching/.condarc
 populated config files : /home/dching/.condarc
          conda version : 4.11.0
    conda-build version : not installed
         python version : 3.8.8.final.0
       virtual packages : __cuda=11.6=0
                          __linux=5.13.0=0
                          __glibc=2.31=0
                          __unix=0=0
                          __archspec=1=x86_64
       base environment : /home/dching/miniconda3  (writable)
      conda av data dir : /home/dching/miniconda3/etc/conda
  conda av metadata url : None
           channel URLs : https://conda.anaconda.org/conda-forge/linux-64
                          https://conda.anaconda.org/conda-forge/noarch
                          https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /home/dching/miniconda3/pkgs
                          /home/dching/.conda/pkgs
       envs directories : /home/dching/miniconda3/envs
                          /home/dching/.conda/envs
               platform : linux-64
             user-agent : conda/4.11.0 requests/2.27.1 CPython/3.8.8 Linux/5.13.0-30-generic ubuntu/20.04.4 glibc/2.31
                UID:GID : 1000:1000
             netrc file : None
           offline mode : False

carterbox (Member Author) commented:

Maybe rebuilding with the updated pinnings (from last week) will fix this problem. There was a change to the compiler version in the pinnings.

conda-forge/conda-forge-pinning-feedstock#2010

leofang (Member) commented Feb 25, 2022

I doubt #92 is the right solution, but it wouldn't hurt to consult @conda-forge/core: why would the compiler pinning update require other packages to rebuild? Shouldn't it be a smooth, non-disruptive process?

carterbox (Member Author) commented Feb 26, 2022

It looks like openmpi-mpifort is a metapackage which ensures that the fortran compiler is the same one that was used to compile openmpi. So, if openmpi was never built with gfortran=10, then you cannot find an openmpi-mpifort that is compatible with gfortran=10.

The problem I'm experiencing is that a feedstock uses the {{ compiler('fortran') }} directive, which wants gfortran=10, AND openmpi-mpifort, but there is no openmpi that was built with gfortran=10 yet.

I think the pinnings determine which compiler versions are used to build feedstocks, right? So if they recently updated gfortran=10 to the pinnings, then we would need to rebuild openmpi to have a build compatible with gfortran=10?

Unfortunately, it seems like this feedstock is building only with gfortran=7, and I don't understand why. Basically, I don't understand what these selectors in the pinnings are doing:

  - 10                         # [linux]
  - 5                          # [win64]
  - 7                          # [os.environ.get("CF_CUDA_ENABLED", "False") == "True" and linux64]
  - 9                          # [os.environ.get("CF_CUDA_ENABLED", "False") == "True" and linux64]
  - 10                         # [os.environ.get("CF_CUDA_ENABLED", "False") == "True" and linux64]
  - 10                         # [os.environ.get("CF_CUDA_ENABLED", "False") == "True" and linux64]

EDIT: These selectors say that non-CUDA Linux builds use gfortran=10, while each of the CUDA-enabled linux64 builds uses a different compiler version matched to its cudatoolkit version.
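
For illustration, here is a rough sketch of how zipped pinning entries behave in a conda_build_config.yaml (this is not the real conda-forge pinning file; the values and pairings are only examples of conda-build's zip_keys mechanism):

fortran_compiler_version:
  - 7
  - 9
  - 10
cuda_compiler_version:
  - 10.2
  - 11.0
  - 11.2
zip_keys:
  # entry i of one list is paired with entry i of the other, producing three
  # build variants: (gfortran 7, CUDA 10.2), (gfortran 9, CUDA 11.0),
  # and (gfortran 10, CUDA 11.2)
  - [fortran_compiler_version, cuda_compiler_version]

So, with this mechanism, a CUDA-enabled build only ever sees the compiler version that is zipped with its CUDA version.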

carterbox (Member Author) commented:

IMO, the problem is that openmpi has no non-CUDA build for linux64; it is only being built with cuda=10. Thus, while the other non-CUDA packages are now building with gfortran=10, openmpi only builds with gfortran=7.

So it becomes impossible to have a non-CUDA package depend on openmpi-mpifort, because non-CUDA implies gfortran=10 while openmpi-mpifort implies gfortran=7.

leofang (Member) commented Mar 1, 2022

Thanks, @carterbox. It kinda makes sense to me. Do you know if an MPI compiler wrapper has to pin the host compiler version? Can we add c/c++/fortran compilers to ignore_run_exports_from? cc: @dalcinl

leofang (Member) commented Mar 1, 2022

Otherwise, we'd have to keep rebuilding for every compiler pinning upgrade, which would be annoying...

also cc @conda-forge/core

dalcinl (Contributor) commented Mar 1, 2022

IMHO, the MPI compiler wrappers should require a host compiler version greater than or equal to the one used to build MPI. Does that make sense? Is it doable? I'm not sure how to implement that in the recipe.

leofang (Member) commented Mar 7, 2022

Sorry for the late reply; I have been unable to access my computer since last week.

IMHO, the MPI compiler wrappers should require a host compiler version greater or equal to the one used to build MPI.

I think it's doable with a little hack. The challenge is that the compiler instruction

run:
  - {{ compiler('c') }}

cannot have a version constraint (e.g. {{ compiler('c') }} >=9 is not supposed to work).

However, we can first add all compiler instructions (c, cxx, fortran) to ignore_run_exports_from (which is what we do to ignore nvcc), and then, for example for openmpi-mpicc, add this to the run section:

run:
  # I hard-coded the lower bounds here; they should be inferred from the
  # compiler version used at build time
  - gcc_linux-64 >=9       # [linux64]
  - gcc_linux-aarch64 >=9  # [aarch64]
  - gcc_linux-ppc64le >=9  # [ppc64le]
  - clang_osx-64 >=11      # [osx and not arm64]
  - clang_osx-arm64 >=11   # [osx and arm64]

and the generated wrapper will have the correct constraint, e.g. for linux64 we will get gcc_linux-64 >=9 instead of the 9.* pin seen currently:
[screenshot of the current constraint omitted]

The same applies to the other compiler wrappers (mpicxx, mpifort). @carterbox would you like to give it a shot?
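
For concreteness, the ignore_run_exports_from part described above might look something like this in the recipe (a sketch only; where exactly it belongs, top-level or per output, depends on how the feedstock is structured):

build:
  ignore_run_exports_from:
    # ignore the run_exports coming from the compiler packages, similar to
    # how nvcc is handled, as described in the comment above
    - {{ compiler('c') }}
    - {{ compiler('cxx') }}
    - {{ compiler('fortran') }}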

isuruf (Member) commented Mar 7, 2022

run:
  - {{ c_compiler }} >={{ c_compiler_version }}

leofang (Member) commented Mar 7, 2022

(The lower bounds should be fetchable from c_compiler_version, cxx_compiler_version, and fortran_compiler_version, but I am not sure whether we need a Jinja expression or not.)

beckermr (Member) commented Mar 7, 2022

Always use the Jinja2 expressions as above.

leofang (Member) commented Mar 7, 2022

That's neat!
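
Rewriting the hard-coded run section from the earlier comment with these variant variables would then look roughly like this (illustrative only; whether a platform suffix is also needed depends on how the compiler names are defined in the pinnings):

run:
  # e.g. for the openmpi-mpifort output: the compiler name and the lower bound
  # come from the variant config instead of being hard-coded per platform
  - {{ fortran_compiler }} >={{ fortran_compiler_version }}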

carterbox (Member Author) commented:

I realized that another solution may be to never do this:

build:
  - openmpi-mpifort
  - {{ compiler('fortran') }}

but do this instead:

build:
  - openmpi-mpifort

I'm not sure if that was the original intention of these compiler meta-packages or not.

leofang (Member) commented Mar 7, 2022

No, I think the former is preferred. A project could have several shared libraries that do not need to be compiled by an MPI compiler wrapper, and it's sufficient to cover those cases with {{ compiler(...) }}.

Now that I take a second look at it, we probably should have added

build:
  - {{ mpi }}-mpicc

to mpi4py-feedstock...😂 @dalcinl

dalcinl (Contributor) commented Mar 8, 2022

IIRC, the original intention of the {{mpi}}-mpicc metapackages was to provide a quick way to bring in a compiler within a user's environment to quickly build other MPI-based codes (perhaps via pip, or perhaps with the usual configure & make stanza). The {{mpi}}-mpicc metapackages should NOT be used as dependencies in recipes. IIRC, all of the other recipes I deal with (mpi4py, petsc[4py], and slepc[4py]) do not use these metapackages as dependencies.

In short, the {{mpi}}-mpicc metapackages should be used by users who quickly need a compiler to install stuff manually within an environment. They should NOT be used by conda recipe authors.

carterbox (Member Author) commented:

Then how do recipe authors ensure that a compatible compiler is installed alongside openmpi? This recipe currently doesn't offer any run constraints for the compilers.

dalcinl (Contributor) commented Mar 9, 2022

@carterbox I just stated the original intention of these metapackages. Since then, things have changed a lot in conda-forge, so much so that I no longer feel qualified to provide advice about the current state of affairs.

The root issue here is indeed related to dependency management. You are right, this recipe does not have any run constraint on compilers. In principle, those constraints should not be strict, as long as you use a compiler version newer than the one used to build the package and the compiler truly commits to backward compatibility. But I do not know how (or whether it is possible) to specify any kind of constraints on compilers (or just their runtimes) within a recipe.

So it looks like, until things get better, the only viable solution to work around this mess is to rebuild packages for each compiler version as they pop up, which is exactly what you are proposing in this issue, and I never objected.

dalcinl (Contributor) commented Mar 9, 2022

However, what happens if folks simply do not use {{mpi}}-mpicc/fort in their recipes? GCC has a high level of backward compatibility, and unless gfortran has recently changed the module file format, I guess you can compile new code with, say, gcc-11 and use an openmpi package that was built with, say, gcc-9 or gcc-10. I may be wrong, but prima facie, I would expect it to work.

carterbox changed the title from "Rebuild for gfortran_linux-64=10 ?" to "{{mpi}}-mpicc not for use in recipes ?" on Mar 10, 2022
jakirkham (Member) commented:

Recipe authors should be following these docs. If those docs are somehow missing something, we should update them.

carterbox (Member Author) commented:

So maybe I should add something like:

MPI Compilers
-------------

There is no conda-forge equivalent of mpicc/mpif90. Use compiler directives for
corresponding compilers as normal. Do not use the `[openmpi,mpich]-mpicc` metapackages
in the `requirements/build` section of a recipe.

dalcinl (Contributor) commented Mar 15, 2022

There is no conda-forge equivalent of mpicc/mpif90.

What do you mean? The mpich and openmpi packages do install mpicc/mpicxx/mpifort. I find that first sentence confusing.

carterbox (Member Author) commented:

MPI Compilers
-------------

Do not use the `[openmpi,mpich]-mpicc` metapackages in the `requirements/build` section
of a recipe; the MPI compiler wrappers are included in the main openmpi/mpich packages.
Add openmpi/mpich to the `requirements/host` section and use compiler directives for the 
corresponding compilers in `requirements/build` as normal.
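
For example, a recipe following this guidance might have requirements roughly like the following (a hypothetical sketch, not taken from any particular feedstock):

requirements:
  build:
    - {{ compiler('c') }}
    - {{ compiler('fortran') }}
  host:
    # the openmpi package itself provides the mpicc/mpifort wrappers
    - openmpi
  run:
    - openmpi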
