Update `conda` recipes for Enhanced Compatibility effort #893

Conversation
rerun tests

1 similar comment

rerun tests

Depends on #901. Keeping as a draft until that is complete. We'll probably see lots of failures until that PR is merged.
Force-pushed from fa82573 to c6498fa
Force-pushed from c6498fa to b95ccae
`conda` seems to only build multiple variants if the variant key is explicitly used in the recipe (i.e. not via Jinja).
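For illustration (a hypothetical sketch, not this PR's actual files): with a variant key defined in `conda_build_config.yaml`, `conda-build` only expands it into separate packages if the key is referenced by name in `meta.yaml`, e.g. in the build string:

```yaml
# conda_build_config.yaml -- hypothetical variant key and values
some_variant:
  - "on"
  - "off"
```

```yaml
# meta.yaml -- the key must appear literally in the recipe for
# conda-build to generate one package per variant value
build:
  string: "{{ some_variant }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}"
```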
rerun tests

After reviewing the CI logs, it seems that these changes are working correctly.

@gpucibot merge
```diff
@@ -33,7 +34,7 @@ requirements:
     - python
     - numba >=0.49
     - numpy
-    - {{ pin_compatible('cudatoolkit', max_pin='x.x') }}
+    - {{ pin_compatible('cudatoolkit', max_pin='x', min_pin='x') }}
```
In conda-forge we decided that the `min_pin` should be `x.x`. Please see the comment (conda-forge/nvcc-feedstock#71 (comment)) and the subsequent discussion for context.
Hmm. I'm confused about how this would work. If we set `min_pin` to `x.x`, then wouldn't our pins effectively become `cudatoolkit>=11.5,<12`? How would this work for users that use `cudatoolkit=11.{0,2,4}`?

This is all assuming that we're only going to build packages with CUDA `11.5`, which is what I thought the plan was, no?
After merging #893, there are now 2 `librmm` `conda` packages being published instead of 1. Therefore the upload script needs to be updated accordingly. Skipping CI since the upload script isn't run on PRs anyway.

Authors:
- AJ Schmidt (https://github.com/ajschmidt8)

Approvers:
- Jordan Jacobelli (https://github.com/Ethyling)

URL: #909
This PR tweaks the changes from #893 and #909 so that `rmm` produces two packages (`has_cma` and `no_cma`) instead of `librmm`.

Authors:
- AJ Schmidt (https://github.com/ajschmidt8)

Approvers:
- Jordan Jacobelli (https://github.com/Ethyling)

URL: #910
This PR updates the `conda` recipe build strings and `cudatoolkit` version specifications as part of the Enhanced Compatibility efforts.

### `rmm` Changes

The build strings in the `conda` recipe have been updated to only include the major CUDA version (i.e. `librmm-21.12.00a-cuda11_gc781527_12.tar.bz2`), and the `cudatoolkit` version specifications will now be formatted like `cudatoolkit >=x,<y.0a0` (i.e. `cudatoolkit >=11,<12.0a0`).

Moving forward, we'll build the packages with a single CUDA version (i.e. `11.5`) and test them in environments with different CUDA versions (i.e. `11.0`, `11.2`, `11.4`, etc.).
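As a rough sketch of what the relevant parts of the recipe might look like under this scheme (variable names such as `cuda_version`/`cuda_major` and the exact expressions are assumptions, not copied from the PR):

```yaml
# Hypothetical excerpt of conda/recipes/rmm/meta.yaml
{% set cuda_version = '.'.join(environ.get('CUDA', '11.5').split('.')[:2]) %}
{% set cuda_major = cuda_version.split('.')[0] %}

build:
  # yields build strings like cuda11_<githash>_<buildnum>
  string: cuda{{ cuda_major }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}

requirements:
  run:
    # renders as e.g. `cudatoolkit >=11,<12.0a0` when built against CUDA 11.x
    - {{ pin_compatible('cudatoolkit', max_pin='x', min_pin='x') }}
```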
### `librmm` Changes

A `conda_build_config.yaml` file has been added to the `librmm` recipe folder so that two variants of `librmm` are built: one with and one without `cudaMallocAsync` support. A new environment variable, `BUILD_FLAGS`, is passed through to `conda/recipes/librmm/build.sh` and is set according to the `cudaMallocAsync` variant value in the recipe. Finally, a build string modifier of either `no_cma` or `has_cma` is appended to the build string, which is used to determine which package should be installed in `ci/gpu/build.sh`.
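A rough illustration of the variant setup described above (the file contents are an assumption based on this description, not copied from the PR):

```yaml
# conda/recipes/librmm/conda_build_config.yaml (illustrative sketch)
cudaMallocAsync:
  - has_cma
  - no_cma
```

Referencing `{{ cudaMallocAsync }}` directly in the recipe's build string (e.g. `cuda{{ cuda_major }}_{{ cudaMallocAsync }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}`) is what makes `conda-build` produce both packages, and the resulting `has_cma`/`no_cma` suffix gives `ci/gpu/build.sh` something concrete to match on when choosing which package to install. How `BUILD_FLAGS` is derived from the variant in the actual recipe is not shown here.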