Conversation

@simon-mo (Collaborator) commented Jul 29, 2025

Signed-off-by: simon-mo <simon.mo@hey.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only fastcheck CI, a small and essential subset of CI tests designed to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added the ci/build label Jul 29, 2025
@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new CI test step for the Blackwell GPU architecture. The intention is good, but the implementation has a couple of areas for improvement. My review focuses on making the new test step clearer, more robust, and more effective at validating the target hardware.

```yaml
  - vllm/
  commands:
  - nvidia-smi
  - python3 examples/offline_inference/basic/chat.py
```

critical

This test command has two significant issues:

  1. Lack of Specificity: The test is labeled "Blackwell Test" but runs a generic example without any Blackwell-specific configuration. To be a meaningful test for the new architecture, it should exercise features unique to Blackwell, such as FP8 data types. Please consider adding arguments that enable such features (e.g., --quantization fp8).
  2. Implicit Dependency: The command relies on the default model (meta-llama/Llama-3.2-1B-Instruct) hardcoded in examples/offline_inference/basic/chat.py. This makes the CI step fragile. To make the test more robust and self-contained, the model should be specified explicitly as a command-line argument:

```yaml
  - python3 examples/offline_inference/basic/chat.py --model meta-llama/Llama-3.2-1B-Instruct --quantization fp8 # or other Blackwell-specific args
```
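
Putting both points together, a minimal sketch of the revised step could look like the following (an illustration only, assuming the step keeps the existing label and working_dir and that FP8 quantization is the Blackwell-specific feature being exercised):

```yaml
- label: Blackwell Test
  working_dir: "/vllm-workspace/"
  commands:
  - nvidia-smi
  # Pin the model explicitly instead of relying on the script's default,
  # and enable FP8 quantization to exercise a Blackwell-specific path.
  - python3 examples/offline_inference/basic/chat.py --model meta-llama/Llama-3.2-1B-Instruct --quantization fp8
```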


```yaml
- label: Blackwell Test
  working_dir: "/vllm-workspace/"
  # optional: true
```

high

The optional: true setting is commented out, which creates ambiguity about the intended behavior of this test step. If this step should not block the pipeline on failure, this line should be uncommented. If the step is mandatory, this commented-out line should be removed to improve clarity and prevent future confusion.
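
For illustration, the two unambiguous variants might look like this (a sketch assuming, as the comment above describes, that optional: true marks a step as non-blocking in this pipeline's schema):

```yaml
# Variant 1: non-blocking step; a failure does not gate the pipeline.
- label: Blackwell Test
  working_dir: "/vllm-workspace/"
  optional: true

# Variant 2: mandatory step; the commented-out line is removed entirely.
- label: Blackwell Test
  working_dir: "/vllm-workspace/"
```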

@simon-mo changed the title from "[ci] add b200 test" to "[ci] add b200 test placeholder" Jul 29, 2025
@simon-mo merged commit 0d0cc9e into vllm-project:main Jul 30, 2025
33 of 94 checks passed
liuyumoye pushed a commit to liuyumoye/vllm that referenced this pull request Jul 31, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
vadiklyutiy pushed a commit to CentML/vllm that referenced this pull request Aug 5, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: x22x22 <wadeking@qq.com>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Noam Gat <noamgat@gmail.com>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Paul Pak <paulpak58@gmail.com>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Diego-Castan <diego.castan@ibm.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: simon-mo <simon.mo@hey.com>