
Conversation

@ruisearch42 (Collaborator) commented Aug 28, 2025

Purpose

Currently we strictly pack dp_size_local DP ranks only onto the master node. DeepEP, however, assumes that EP ranks [0, 7] are on the same node (likewise [8, 15], and so on) and uses CUDA IPC for communication among them. If this assumption is not satisfied, a runtime error is raised because CUDA IPC does not work across nodes. This PR fixes the issue by restricting the placement.
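
For context, a minimal sketch of the node-local packing this PR enforces, expressed with Ray placement groups; the bundle shape and the dp_size_local value are illustrative assumptions, not the PR's actual code:

```python
import ray
from ray.util.placement_group import placement_group

ray.init()

dp_size_local = 8  # assumption: number of DP ranks to co-locate per node

# STRICT_PACK places every bundle in the group on a single node, so
# ranks [0, dp_size_local) can never be split across nodes -- the
# property DeepEP's CUDA-IPC communication relies on.
pg = placement_group([{"GPU": 1}] * dp_size_local, strategy="STRICT_PACK")
ray.get(pg.ready())  # block until all node-local bundles are reserved
```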

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors the data parallel placement group creation logic in Ray to ensure that dp_size_local ranks are strictly packed onto the same node. This is an important change for use cases like DeepEP. While the overall direction is correct, I've identified a critical bug in the implementation that prevents scheduling on any node other than the master node, which would break multi-node data parallelism. My review includes a suggested fix for this issue.

@ruisearch42 ruisearch42 changed the title [DP][ray] Strictly pack dp_size_local ranks to the same node [DP][ray] Support different VLLM_RAY_DP_PACK_STRATEGY Sep 4, 2025
@youkaichao (Member) commented:
cc @njhill

vllm/envs.py (outdated excerpt)
# - "strict":
# allocate exactly data-parallel-size-local DP ranks to each picked node;
# This environment variable is ignored if data-parallel-backend is not Ray.
"VLLM_RAY_DP_PACK_STRATEGY": lambda: os.getenv("VLLM_RAY_DP_PACK_STRATEGY", "fill"),
Collaborator:

shouldn't the default be strict?

@ruisearch42 (Author):

updated
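
For illustration, a hedged sketch of opting into the strict strategy at launch; the CLI flags and model id below are assumed from vLLM's data-parallel options and are not taken from this PR's diff — only the env var name comes from this change:

```python
import os
import subprocess

# Assumed flag names from vLLM's data-parallel CLI; the model id is a
# placeholder. VLLM_RAY_DP_PACK_STRATEGY is ignored unless the
# data-parallel backend is Ray.
env = dict(os.environ, VLLM_RAY_DP_PACK_STRATEGY="strict")
subprocess.run(
    [
        "vllm", "serve", "my-org/my-model",   # hypothetical model
        "--data-parallel-size", "16",
        "--data-parallel-size-local", "8",
        "--data-parallel-backend", "ray",
    ],
    env=env,
    check=True,
)
```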

@kouroshHakha (Collaborator) left a comment:

needs one change

@ruisearch42 added the "ready" label (ONLY add when PR is ready to merge / full CI is needed) Oct 9, 2025
@kouroshHakha (Collaborator) commented:

LGTM

@njhill (Member) left a comment

@ruisearch42 ruisearch42 enabled auto-merge (squash) October 9, 2025 23:14
@vllm-bot vllm-bot merged commit 757fa4a into vllm-project:main Oct 10, 2025
44 of 47 checks passed
Commits referencing this pull request:
  • xuebwang-amd pushed to xuebwang-amd/vllm (Oct 10, 2025)
  • Dhruvilbhatt pushed to Dhruvilbhatt/vllm (Oct 14, 2025)
  • bbartels pushed to bbartels/vllm (Oct 16, 2025)
  • lywa1998 pushed to lywa1998/vllm (Oct 20, 2025)
  • alhridoy pushed to alhridoy/vllm (Oct 24, 2025)
  • xuebwang-amd pushed to xuebwang-amd/vllm (Oct 24, 2025)
  • 0xrushi pushed to 0xrushi/vllm (Oct 26, 2025)
  • rtourgeman pushed to rtourgeman/vllm (Nov 10, 2025)