
Use gcc 9 for CUDA 11 #1053 (Closed)

leofang wants to merge 1 commit.

Conversation

@leofang (Member) commented on Dec 30, 2020:

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

Close #1000.

leofang requested a review from a team as a code owner on December 30, 2020.
@conda-forge-linter commented:

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@leofang (Member, Author) commented on Dec 30, 2020:

Not sure if this achieves what I think it does...

@isuruf (Member) left a review comment:

Have you tried rerendering a feedstock with this? Copy this into cupy-feedstock, remove the timestamp, and rerender. You'll find that it doesn't work.

@leofang (Member, Author) commented on Dec 30, 2020:

Thanks, @isuruf. I was following @h-vetinari's suggestion (#1000 (comment)). So what's the proper way to achieve the desired effect? I suppose you're saying zipping either in the global pinning (#1051 (comment)) or in the migrator (as done here) just wouldn't work, so I am lost...

@isuruf (Member) commented on Dec 30, 2020:

I'll let you know if I find one. 😄

leofang marked this pull request as a draft on December 30, 2020.
@h-vetinari (Member) commented:

So, there are a bunch of problems here, I think:

  • updating the zip_keys in the pinning feedstock while a migrator that's updating some of the keys is in progress will probably break it (but then, I have close to zero experience with migrators)
  • the more keys in a zip, the more fragile it becomes, because all the keys need to match in length under all circumstances (i.e. arches / OSes / etc.)
  • cxx_compiler_version needs to be in there as well
  • It doesn't look like everything will always have the same length (does the CI here check this?)

I think that it should be feasible after the migrator has run through to expand the zip_keys in the pinning feedstock... WDYT, @isuruf?
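
A minimal sketch of the kind of expanded zip being discussed, in conda_build_config.yaml form (the key names are real conda-forge pinning keys; the version entries and selectors are illustrative assumptions, not the actual pinning values):

```yaml
# Illustrative only: entries and selectors assumed.
cuda_compiler_version:
  - None
  - 10.2    # [linux64]
  - 11.0    # [linux64]
cxx_compiler_version:
  - 9
  - 8       # [linux64]  (CUDA 10.2 supports at most gcc 8)
  - 9       # [linux64]
zip_keys:
  -
    - cuda_compiler_version
    - cxx_compiler_version
```

The fragility described above is that every zipped list has to collapse to the same length under every selector combination (length 1 on osx/win here, 3 on linux64), and each extra key added to the zip is one more list that must shrink in lockstep.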

@kkraus14 (Contributor) commented:

My understanding is that expanding cxx_compiler_version will also affect non-CUDA feedstocks, in that they'll build once for each version for which the selector matches.
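
To make that concrete (values assumed): for a recipe that never uses the CUDA compiler, cuda_compiler_version is an unused key and gets pruned from the variant computation, leaving the expanded list as a free axis:

```yaml
# What a non-CUDA feedstock would effectively see once the unused
# cuda_compiler_version key is pruned (values assumed):
cxx_compiler_version:
  - 9
  - 8
  - 9
# => variants for gcc 9 and gcc 8 (after deduplication), where a
#    single default gcc build was wanted.
```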

@beckermr (Member) commented:

@isuruf Does the way we handle keys zipped with docker_image in the latest smithy help here?

@isuruf (Member) commented on Jan 8, 2021:

> @isuruf Does the way we handle keys zipped with docker_image in the latest smithy help here?

Nope

@h-vetinari (Member) commented on Feb 17, 2021:

So #1222 / #1223 should make a dent here, but unfortunately still won't allow us to universally go to gcc 9+, because 10.2 is only compatible with gcc 8, see here.

@jakirkham (Member) commented:

What if we added cxx_compiler_version and fortran_compiler_version to the zip_keys?

@isuruf (Member) commented on Feb 19, 2021:

@jakirkham, please read @kkraus14's comment above

@jakirkham (Member) commented:

Right, we can include an entry for cuda_compiler_version with None.
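
Presumably along these lines: a dedicated None row, so that ordinary builds stay zipped to a single compiler (values assumed; as isuruf argues below, this still fans out for feedstocks that never use the CUDA compiler, since the zip does not survive the pruning of the unused key):

```yaml
zip_keys:
  -
    - cuda_compiler_version
    - cxx_compiler_version
cuda_compiler_version:
  - None    # the non-CUDA row: one gcc for ordinary builds
  - 10.2
  - 11.0
cxx_compiler_version:
  - 9
  - 8
  - 9
```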

@isuruf (Member) commented on Feb 19, 2021:

I don't know whether you understood @kkraus14's comment or not, so I'll say it again: expanding cxx_compiler_version will produce a matrix of builds for non-CUDA feedstocks, which we do not want.

@jakirkham (Member) commented:

I don't think that is necessarily the case.

@isuruf (Member) commented on Feb 19, 2021:

Well, I've tried it multiple times before and I can tell you that it will be. Have you tried even once?

jakirkham mentioned this pull request on Jul 19, 2021.
@leofang (Member, Author) commented on Jan 26, 2022:

I think this PR is moot now that we have a proper mechanism to zip the C/C++ compiler version with the nvcc version 🙂
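
(For reference, the mechanism referred to here is the per-platform zip that conda-forge-pinning eventually adopted; a simplified, from-memory sketch, abridged rather than authoritative:)

```yaml
# Simplified shape of the linux zip in conda-forge-pinning's
# conda_build_config.yaml; see the pinning feedstock for the
# authoritative version.
zip_keys:
  -                               # [linux]
    - c_compiler_version          # [linux]
    - cxx_compiler_version        # [linux]
    - fortran_compiler_version    # [linux]
    - cuda_compiler_version       # [linux]
    - docker_image                # [linux]
```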

leofang closed this pull request on Jan 26, 2022 and deleted the zip_cuda_c_cxx_ver branch.
Closes: cxx_compiler_version incompatible with nvcc (#1000)