
[Core][Optimization] remove vllm-nccl #5091

Merged
merged 7 commits into vllm-project:main on May 29, 2024

Conversation

youkaichao
Member

After months of investigation, I finally found the root cause: NCCL 2.19 turns on cuMem-based (virtual memory) allocation by default, which costs extra memory during cudagraph capture.

Per the documentation:

NCCL_CUMEM_ENABLE
(since 2.18)

Use CUDA cuMem* functions to allocate memory in NCCL.

Values accepted
0 or 1. Default is 0 in 2.18 (disabled); since 2.19 this feature is auto-enabled by default if the system supports it (NCCL_CUMEM_ENABLE can still be used to override the autodetection).

The transition happens right at NCCL 2.19, where this feature becomes enabled by default. vLLM, very unfortunately, hit exactly this problem when we upgraded to PyTorch 2.2.

The solution is simple: set the NCCL_CUMEM_ENABLE environment variable to 0 to disable the feature, and everything is fine. We can then use the NCCL library bundled with PyTorch, and we no longer need to ship vllm-nccl.
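For illustration, here is a minimal sketch of the workaround, assuming a PyTorch program that initializes an NCCL process group. The key point is that NCCL_CUMEM_ENABLE must be set before the NCCL communicator is created; the single-process setup, MASTER_ADDR/MASTER_PORT defaults, and init_distributed helper below are illustrative assumptions, not how vLLM wires this internally.

import os

# NCCL reads NCCL_CUMEM_ENABLE when the communicator is created, so set it
# before any NCCL initialization. "0" overrides the 2.19+ auto-detection and
# disables cuMem-based allocation, avoiding the extra memory cost during
# cudagraph capture.
os.environ["NCCL_CUMEM_ENABLE"] = "0"

import torch
import torch.distributed as dist

# Hypothetical single-process example: initialize the NCCL process group
# only after the environment variable has been set.
def init_distributed(rank: int = 0, world_size: int = 1) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

if __name__ == "__main__":
    init_distributed()
    # ... capture CUDA graphs / run the model here ...
    dist.destroy_process_group()

Because the variable is consumed at communicator creation time, setting it inside the launching process (or the launcher's environment) is sufficient; no change to the NCCL library itself is needed.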

@youkaichao youkaichao enabled auto-merge (squash) May 29, 2024 02:28
@youkaichao youkaichao merged commit 5bd3c65 into vllm-project:main May 29, 2024
64 checks passed
@youkaichao youkaichao deleted the unpin_vllm_nccl branch May 29, 2024 05:14
blinkbear pushed a commit to blinkbear/vllm that referenced this pull request May 29, 2024
dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request May 31, 2024
@hmellor hmellor mentioned this pull request May 31, 2024
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 8, 2024
dtrifiro added a commit to dtrifiro/vllm that referenced this pull request Jun 13, 2024
joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Jun 18, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Jun 21, 2024
prashantgupta24 pushed a commit to opendatahub-io/vllm that referenced this pull request Jul 1, 2024
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jul 14, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Jul 17, 2024
nathan-weinberg pushed a commit to nathan-weinberg/vllm that referenced this pull request Jul 18, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Jul 23, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Aug 6, 2024
Temirulan pushed a commit to Temirulan/vllm-whisper that referenced this pull request Sep 6, 2024
dtrifiro added a commit to dtrifiro/vllm that referenced this pull request Sep 13, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Sep 13, 2024
dtrifiro added a commit to opendatahub-io/vllm that referenced this pull request Sep 27, 2024