
Conversation

@zhewenl (Collaborator) commented on Aug 16, 2025

Purpose

Like #16737, DP (data parallelism) is not actually supported in the throughput benchmark, so this PR adds a check that raises an error instead of letting the run hang. (See the discussion in #16222.)

Test Plan

CUDA_VISIBLE_DEVICES=0,1 VLLM_LOGGING_LEVEL=DEBUG CUDA_LAUNCH_BLOCKING=1 VLLM_TRUST_REMOTE_CODE=1 SAFETENSORS_FAST_GPU=1 TORCH_SHOW_CPP_STACKTRACES=1 CUDA_ENABLE_CORE_DUMP_ON_EXCEPTION=1 \
vllm bench throughput --model /data/users/zhewenli/models/qwen_moe \
--tensor-parallel-size 1 --data-parallel-size 2   --enable-expert-parallel \
--max-model-len 2048 --input-len 1000 --output-len 1000 --num-prompts 50 --trust-remote-code

Test Result

Before: the run hangs at Processed prompts:

(EngineCore_0 pid=3839314) DEBUG 08-16 14:07:38 [core.py:728] EngineCore waiting for work.
(EngineCore_0 pid=3839314) DEBUG 08-16 14:07:38 [core.py:728] EngineCore waiting for work.
INFO 08-16 14:07:38 [llm.py:298] Supported_tasks: ['generate']
Adding requests:   0%|                                                                                                        | 0/50 [00:00<?, ?it/s](EngineCore_0 pid=3839314) DEBUG 08-16 14:07:38 [core.py:734] EngineCore loop active.
Adding requests: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 285.04it/s]
Processed prompts:   0%|   

After: the new check raises an error:

Traceback (most recent call last):
  File "/home/zhewenli/uv_env/vllm/bin/vllm", line 10, in <module>
    sys.exit(main())
             ^^^^^^
  File "/data/users/zhewenli/gitrepos/vllm/vllm/entrypoints/cli/main.py", line 54, in main
    args.dispatch_function(args)
  File "/data/users/zhewenli/gitrepos/vllm/vllm/entrypoints/cli/benchmark/throughput.py", line 21, in cmd
    main(args)
  File "/data/users/zhewenli/gitrepos/vllm/vllm/benchmarks/throughput.py", line 585, in main
    validate_args(args)
  File "/data/users/zhewenli/gitrepos/vllm/vllm/benchmarks/throughput.py", line 441, in validate_args
    raise ValueError(
ValueError: Data parallel is not supported in offline benchmark,             please use benchmark serving instead
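
For reference, a minimal sketch of what the added guard in validate_args might look like. Only the attribute name data_parallel_size (mirroring the --data-parallel-size flag) and the error text come from this PR; the surrounding code is illustrative, not the actual diff.

def validate_args(args) -> None:
    # Hypothetical sketch: offline throughput benchmarking drives a single
    # in-process LLM instance, so data-parallel engine cores never receive
    # work and the run hangs. Fail fast instead.
    if getattr(args, "data_parallel_size", 1) > 1:
        raise ValueError(
            "Data parallel is not supported in offline benchmark, "
            "please use benchmark serving instead"
        )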

@github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run further CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@zhewenl requested a review from simon-mo on Aug 16, 2025 at 22:00
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds a check to the throughput benchmark script to prevent the use of data parallelism, which is unsupported and causes the script to hang. The change correctly raises a ValueError when data_parallel_size is greater than 1. My feedback focuses on improving the formatting of the error message string to adhere to PEP 8 style guidelines and produce a cleaner, more readable error for the user.

Comment on lines 429 to 432

    raise ValueError(
        "Data parallel is not supported in offline benchmark, \
        please use benchmark serving instead"
    )

Severity: high

The string continuation using a backslash (\) includes the leading whitespace from the next line in the final string, resulting in a poorly formatted error message. According to PEP 8, the preferred way to wrap long lines is by using Python's implied line continuation inside parentheses. This improves readability and avoids unintended whitespace in the string. [1]

Suggested change

-    raise ValueError(
-        "Data parallel is not supported in offline benchmark, \
-        please use benchmark serving instead"
-    )
+    raise ValueError(
+        "Data parallel is not supported in offline benchmark, "
+        "please use benchmark serving instead"
+    )

Style Guide References

Footnotes

[1] PEP 8 recommends using implied line continuation within parentheses for long lines over using a backslash, especially for strings, to improve readability and prevent issues with extraneous whitespace.
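
The effect the bot describes is visible in the traceback above: the run of spaces after "benchmark," is the continued line's indentation leaking into the string. A minimal standalone reproduction in plain Python (no vLLM imports needed; the variable names are invented for illustration):

# Backslash continuation keeps the next line's leading whitespace in the string.
with_backslash = "Data parallel is not supported in offline benchmark, \
            please use benchmark serving instead"

# Implicit concatenation inside parentheses does not.
with_parens = (
    "Data parallel is not supported in offline benchmark, "
    "please use benchmark serving instead"
)

print(repr(with_backslash))  # '...benchmark,             please use ...'
print(repr(with_parens))     # '...benchmark, please use benchmark serving instead'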

@mergify bot added the performance (Performance-related issues) label on Aug 16, 2025
@simon-mo enabled auto-merge (squash) on Aug 20, 2025 at 01:34
@github-actions bot added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Aug 20, 2025
@simon-mo merged commit f729023 into vllm-project:main on Aug 20, 2025
46 checks passed
divakar-amd pushed a commit to divakar-amd/vllm_upstream that referenced this pull request Aug 20, 2025
cyang49 pushed a commit to cyang49/vllm that referenced this pull request Aug 20, 2025
djmmoss pushed a commit to djmmoss/vllm that referenced this pull request Aug 21, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl added a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
mengxingkongzhouhan pushed a commit to mengxingkongzhouhan/vllm that referenced this pull request Aug 30, 2025
zhewenl added a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
sducouedic pushed a commit to sducouedic/vllm that referenced this pull request Oct 16, 2025

Labels

performance (Performance-related issues) · ready (ONLY add when PR is ready to merge/full CI is needed)
