Build and test with CUDA 13.0.0 #782
Changes from all commits
fd8c975
d793df8
535401c
dd82e21
f003cb3
```diff
@@ -1,14 +1,14 @@
 # Copyright (c) 2023-2025, NVIDIA CORPORATION.
-# CUDA_VER is `<major>.<minor>` (e.g. `12.0`)
+# CUDA_VER is `<major>.<minor>` (e.g. `13.0`)

 pull-request:
   - { CUDA_VER: '12.0', ARCH: 'amd64', PYTHON_VER: '3.10', GPU: 'l4', DRIVER: 'earliest' }
   - { CUDA_VER: '12.9', ARCH: 'arm64', PYTHON_VER: '3.11', GPU: 'a100', DRIVER: 'latest' }
   - { CUDA_VER: '12.9', ARCH: 'amd64', PYTHON_VER: '3.13', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '12.9', ARCH: 'amd64', PYTHON_VER: '3.12', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '13.0', ARCH: 'arm64', PYTHON_VER: '3.13', GPU: 'a100', DRIVER: 'latest' }
   - { CUDA_VER: '13.0', ARCH: 'amd64', PYTHON_VER: '3.13', GPU: 'h100', DRIVER: 'latest' }
 branch:
   - { CUDA_VER: '12.0', ARCH: 'amd64', PYTHON_VER: '3.10', GPU: 'l4', DRIVER: 'earliest' }
   - { CUDA_VER: '12.0', ARCH: 'amd64', PYTHON_VER: '3.10', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '12.0', ARCH: 'arm64', PYTHON_VER: '3.11', GPU: 'a100', DRIVER: 'latest' }
   - { CUDA_VER: '12.0', ARCH: 'amd64', PYTHON_VER: '3.12', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '12.9', ARCH: 'amd64', PYTHON_VER: '3.13', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '12.0', ARCH: 'arm64', PYTHON_VER: '3.11', GPU: 'a100', DRIVER: 'earliest' }
   - { CUDA_VER: '12.9', ARCH: 'amd64', PYTHON_VER: '3.11', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '12.9', ARCH: 'arm64', PYTHON_VER: '3.13', GPU: 'a100', DRIVER: 'latest' }
   - { CUDA_VER: '13.0', ARCH: 'amd64', PYTHON_VER: '3.11', GPU: 'l4', DRIVER: 'latest' }
   - { CUDA_VER: '13.0', ARCH: 'arm64', PYTHON_VER: '3.12', GPU: 'a100', DRIVER: 'latest' }
```
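Each matrix entry pairs a CUDA version with an architecture, Python version, GPU type, and driver policy, and CI runs one job per entry. As a minimal illustration (a hypothetical helper, not part of this repo; the allowed value sets are inferred from the entries above), such entries could be validated before the workflow consumes them:

```python
# Hypothetical validation sketch for CI matrix entries (not part of this repo).
REQUIRED_KEYS = {"CUDA_VER", "ARCH", "PYTHON_VER", "GPU", "DRIVER"}
ALLOWED = {
    "ARCH": {"amd64", "arm64"},
    "GPU": {"l4", "a100", "h100"},
    "DRIVER": {"earliest", "latest"},
}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems with one matrix entry (empty if valid)."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - entry.keys())]
    for key, allowed in ALLOWED.items():
        if key in entry and entry[key] not in allowed:
            problems.append(f"bad {key}: {entry[key]!r}")
    return problems

entry = {"CUDA_VER": "13.0", "ARCH": "amd64", "PYTHON_VER": "3.13",
         "GPU": "h100", "DRIVER": "latest"}
print(validate_entry(entry))  # → []
```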
Contributor

Following on from our shared-workflows discussion, should we run at least one of these jobs on an h100? This is a low-traffic repo, so it shouldn't add too much load, and it seems like it would be a good test.

Member (Author)

Great point, I agree! I just pushed f003cb3 switching one of these PR jobs to H100s.
I should have predicted this... there are some `cuml` notebooks that expect to be able to train an `xgboost` model using GPUs. Depending on the CPU-only version, thanks to rapidsai/integration#795, leads to this:

(build link)

This proposes just skipping `cuml` notebook testing here temporarily, to unblock publishing the first nightly container images with CUDA 13 packages. If reviewers agree, I'll add an issue in this repo tracking the work of putting that testing back.
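One common way to skip a subset of notebooks is to filter them out of the discovery step of the test runner. The sketch below is purely illustrative (the function name, directory layout, and skip mechanism are assumptions, not this repo's actual test harness):

```python
# Hypothetical sketch of temporarily excluding cuml notebooks from testing.
# Names and layout are assumptions; this is not the repo's real harness.
import tempfile
from pathlib import Path

SKIP_DIRS = {"cuml"}  # skipped temporarily, pending GPU xgboost with CUDA 13

def notebooks_to_test(root: Path) -> list[Path]:
    """All .ipynb files under root, excluding notebooks in skipped directories."""
    return sorted(
        nb for nb in root.rglob("*.ipynb")
        if SKIP_DIRS.isdisjoint(nb.relative_to(root).parts)
    )

# Demo with a throwaway directory tree.
root = Path(tempfile.mkdtemp())
(root / "cuml").mkdir()
(root / "cuml" / "train.ipynb").write_text("")
(root / "cudf").mkdir()
(root / "cudf" / "demo.ipynb").write_text("")
print([p.name for p in notebooks_to_test(root)])  # → ['demo.ipynb']
```

Keeping the skip list in one place makes it easy to delete the `"cuml"` entry once the tracking issue is resolved.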
As long as we have the issue up, I'm fine with this temporary patch.
Great, thank you. Put up an issue here: #784