gcc 9.3.0 migration #1160
Comments
cc @raydouglass @kkraus14 @mike-wendt (for vis)
Thanks @jakirkham. We may need to handle pinning the compiler in the nvcc feedstock to prevent gcc-9 from being pulled in when nvcc 10.2 or lower is used.
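For illustration, a minimal sketch of what that pin might look like in the nvcc shim's meta.yaml; the use of run_constrained, the gcc_linux-64 package name, and the exact version bound are assumptions here, not the feedstock's actual recipe:

```yaml
# meta.yaml (sketch) -- nvcc shim package built against CUDA 10.2.
# run_constrained does not pull gcc into the environment; it only
# blocks incompatible gcc versions from being co-installed.
requirements:
  run_constrained:
    - gcc_linux-64 <9.0a0  # assumption: nvcc <=10.2 supports gcc <=8
```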
So I saw a PR (conda-forge/openmpi-feedstock#67) recently, which is moving to GCC 9 for a package that also builds CUDA components. Admittedly OpenMPI's usage of CUDA is pretty minimal, but AIUI we probably shouldn't be doing that. Can you please confirm, Keith? If it isn't safe, maybe we should discuss how we can exclude these packages from the GCC 9 migration to avoid creating packages that are broken. Not sure how we would do that, but I want to raise the concern.
We are building both gcc7 and gcc9 on linux-64, so we can mark the gcc9 ones broken and put proper skips in the recipe. We could also zip the cuda keys with the other compiler keys, but this would be more complicated.
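For reference, a sketch of the zipping idea in conda_build_config.yaml; the version pairings below are illustrative assumptions, and the real pins live in conda-forge-pinning:

```yaml
# conda_build_config.yaml (sketch): zip the CUDA compiler version with
# the C compiler version so conda-build renders only matching pairs
# instead of the full cross-product of variants.
c_compiler_version:
  - "7"     # assumed pairing for CUDA 10.2
  - "9"     # assumed pairing for a CUDA release that supports gcc 9
cuda_compiler_version:
  - "10.2"
  - "11.0"
zip_keys:
  - [c_compiler_version, cuda_compiler_version]
```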
The CUDA compiler, `nvcc`, only supports host gcc versions up to a given release: nvcc 10.2 and earlier reject gcc 9, which is first supported in CUDA 11.0. Note that if a package is only using the CUDA host APIs, but not actually compiling any CUDA code, then using gcc 9.x should be safe regardless.
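Given those limits, a recipe that compiles CUDA code could also skip the incompatible combination outright. A hedged sketch, assuming both variant keys are available to the selector context:

```yaml
# meta.yaml (sketch): drop variants where the host gcc is newer than
# what the selected nvcc supports.
build:
  skip: true  # [c_compiler_version == "9" and cuda_compiler_version == "10.2"]
```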
Thanks @kkraus14! Given this, I think we should put the constraint directly in openmpi if it is needed. Our CUDA universe is not so big right now that we can't handle this more selectively.
I made an issue here: conda-forge/openmpi-feedstock#68. We can discuss more there.
I think this is an argument for us to split the […]
Maybe this could be handled as part of a […]
The headers, compiler, and driver library are not allowed to be redistributed per the CUDA EULA. Right now we actually don't symlink the headers in any way; we just symlink the compiler and the CUDA driver library, and the compiler handles finding the necessary headers. For libraries like […]
Right, not saying we package the headers or the driver library. Just saying we set the appropriate environment variables as we do currently. Maybe with an activation script?
So when an incompatible GCC and CUDA version are used, would there be compiler errors, or is it possible for the build to silently succeed (even if what it produced is problematic)?
FWIW here's a use case where we do need to match GNU compilers to CUDA compilers (conda-forge/tomopy-feedstock#39). I tried proposing a way to combine them correctly (conda-forge/tomopy-feedstock#39 (comment)), but would not at all be surprised if that is wrong and needs changes. Though hopefully that gets us started thinking about how we should address this use case.
This one is done! |