Conversation

@mgoin (Member) commented Jan 15, 2025

I think it is time to match PyTorch's default CUDA version. We should still keep wheels built with 12.1 around.

Signed-off-by: mgoin <michael@neuralmagic.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

mergify bot added the documentation and ci/build labels on Jan 15, 2025

@tlrmchlsmth (Member) left a comment

Agree! There are several pieces we pick up going from 12.1 -> 12.4 (Lovelace FP8 kernels, 2:4 sparse kernels, some CUDA graph stuff).

@tlrmchlsmth (Member)

We should still keep wheels built with 12.1 around.

+1 to this as well

commands:
- "aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/q9t5s3a7"
- "DOCKER_BUILDKIT=1 docker build --build-arg max_jobs=16 --build-arg USE_SCCACHE=1 --build-arg GIT_REPO_CHECK=1 --build-arg CUDA_VERSION=12.1.0 --tag public.ecr.aws/q9t5s3a7/vllm-release-repo:$BUILDKITE_COMMIT --target vllm-openai --progress plain ."
- "DOCKER_BUILDKIT=1 docker build --build-arg max_jobs=16 --build-arg USE_SCCACHE=1 --build-arg GIT_REPO_CHECK=1 --build-arg CUDA_VERSION=12.4.0 --tag public.ecr.aws/q9t5s3a7/vllm-release-repo:$BUILDKITE_COMMIT --target vllm-openai --progress plain ."
Member

both "Build wheel - CUDA 12.4" and "Build wheel - CUDA 12.1" build with cuda 12.4?

Collaborator

^ this should stay 12.1.0

Member Author

Isn't this the release image? I have not changed the "Build wheel - CUDA 12.1" case. Why shouldn't this also be 12.4?

@youkaichao (Member) left a comment

Please note that in .buildkite/upload-wheels.sh, we do not upload CUDA 11.8 wheels, but we will upload CUDA 12 wheels.

If you build for both 12.1 and 12.4, make sure only one version is uploaded to avoid an upload race condition.

@khluu (Collaborator) left a comment

Triggered a test run here: https://buildkite.com/vllm/release/builds/2643

and maybe it's time we stop building cu118 wheels?

@mgoin (Member Author) commented Jan 17, 2025

If you build for both 12.1 and 12.4, make sure only one version is uploaded to avoid an upload race condition.

I thought that wheels built against a CUDA version that doesn't match MAIN_CUDA_VERSION in setup.py get a +cuXXX suffix added to their version/wheel name:

vllm/setup.py, line 49 (at commit 54cacf0):

MAIN_CUDA_VERSION = "12.1"

So I think that would prevent wheel collisions. It would be nice to have explicit wheel versions like this to make it more transparent what is available.
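Roughly, the suffix logic works like this (a sketch, not the exact setup.py implementation; the helper name is illustrative):

# Sketch of the +cuXXX local-version logic (approximate).
MAIN_CUDA_VERSION = "12.1"

def vllm_wheel_version(base_version: str, cuda_version: str) -> str:
    # Wheels built against the default CUDA version keep the plain version;
    # any other CUDA build gets a +cuXXX local suffix, so wheel filenames
    # for different CUDA builds cannot collide on upload.
    if cuda_version != MAIN_CUDA_VERSION:
        return f"{base_version}+cu{cuda_version.replace('.', '')}"
    return base_version

print(vllm_wheel_version("0.7.3", "11.8"))  # -> 0.7.3+cu118
print(vllm_wheel_version("0.7.3", "12.1"))  # -> 0.7.3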

@youkaichao (Member)

@mgoin the upload script only checks whether cu118 is present 🤕

if [[ $normal_wheel == *"cu118"* ]]; then
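For example, the guard could cover every non-default build, not just cu118. A minimal sketch (the bucket path is illustrative, reusing the $normal_wheel name above):

# Sketch: skip uploads for every non-default CUDA build so that only one
# wheel version lands at the unversioned location.
if [[ $normal_wheel == *"cu118"* ]]; then
    echo "Skipping cu118 wheel: $normal_wheel"
elif [[ $normal_wheel == *"cu121"* ]]; then
    echo "Skipping cu121 wheel: $normal_wheel"
else
    # Default-CUDA (12.4) wheel: publish it.
    aws s3 cp "$normal_wheel" "s3://vllm-wheels/$BUILDKITE_COMMIT/"
fi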

Collaborator

Looks like PyTorch has 11.8, 12.1, and 12.4. So to confirm, we will publish 12.4 to PyPI, still build for 12.1 and 11.8 for nightly, then push them to artifacts?

@mgoin (Member Author) commented Jan 30, 2025

@khluu could you give this another test run?

@khluu added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Jan 30, 2025
mgoin and others added 2 commits January 31, 2025 18:59
Signed-off-by: mgoin <michael@neuralmagic.com>
@mgoin (Member Author) commented Feb 26, 2025

Validated with CUTLASS sparsity support, since we need at least CUDA 12.2 on Hopper:

bool cutlass_sparse_scaled_mm_supported(int64_t cuda_device_capability) {
  // sparse CUTLASS kernels need at least
  // CUDA 12.2 and SM90 (Hopper)
  return CUDA_VERSION >= 12020 && cuda_device_capability == 90;
}

# Wheel built from this PR:
uv pip install -U vllm --extra-index-url https://wheels.vllm.ai/a6022b380c82ea785233baaf37f10d3a0a55d009
python -c "from vllm.model_executor.layers.quantization.utils.w8a8_utils import sparse_cutlass_supported; print(sparse_cutlass_supported())"
True

# Latest release, built with CUDA 12.1:
uv pip install -U vllm==0.7.3
python -c "from vllm.model_executor.layers.quantization.utils.w8a8_utils import sparse_cutlass_supported; print(sparse_cutlass_supported())"
False

@simon-mo merged commit ca377cf into vllm-project:main on Feb 27, 2025 (67 of 70 checks passed).
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025

Labels: ci/build, documentation, ready
