
Conversation

@nnshah1 (Contributor) commented Aug 18, 2025

Overview:

Pins the precompiled wheel version for vLLM to the same reference used for the source installation.

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • Chores
    • Simplified the AMD64 installation flow by removing conditional branches and using direct install commands for both editable and standard setups.
    • Reduced complexity in dependency installation, leading to more predictable and consistent installs on AMD64.
    • No changes to the ARM64 installation path.

copy-pr-bot bot commented Aug 18, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

coderabbitai bot commented Aug 18, 2025

Walkthrough

The AMD64 branch of container/deps/vllm/install_vllm.sh removes the VLLM_USE_PRECOMPILED flag logic, adds VLLM_PRECOMPILED_WHEEL_LOCATION (unused in shown commands), and simplifies installation to direct uv pip install calls for editable and non-editable modes. The ARM64 path is unchanged.

Changes

Cohort / File(s): vLLM installer (AMD64 path), container/deps/vllm/install_vllm.sh
Summary: Removed VLLM_USE_PRECOMPILED logic; added VLLM_PRECOMPILED_WHEEL_LOCATION derived from VLLM_REF; replaced conditional install with direct uv pip install -e . --torch-backend=$TORCH_BACKEND (editable) and uv pip install . --torch-backend=$TORCH_BACKEND; no changes for ARM64 branch.
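
For orientation, a minimal sketch of what the simplified AMD64 block might look like after this change. The wheel URL pattern and the EDITABLE flag name are assumptions for illustration, not values taken from the actual diff:

# Sketch only: pin the precompiled wheel to the same ref as the source checkout.
# The wheel URL pattern below is an assumed example, not the repository's actual value.
export VLLM_PRECOMPILED_WHEEL_LOCATION="https://wheels.vllm.ai/${VLLM_REF}/vllm-1.0.0-cp38-abi3-manylinux1_x86_64.whl"

if [ "$EDITABLE" = "true" ]; then        # EDITABLE is a hypothetical flag name
    uv pip install -e . --torch-backend=$TORCH_BACKEND
else
    uv pip install . --torch-backend=$TORCH_BACKEND
fi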

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Script as install_vllm.sh (AMD64)
  participant Pip as uv pip

  Note over Script: Previous flow
  User->>Script: Run installer
  Script->>Script: Check VLLM_USE_PRECOMPILED
  alt Precompiled enabled
    Script->>Pip: Install using precompiled wheel
  else
    Script->>Pip: Install from source with torch-backend
  end

  Note over Script: New flow
  User->>Script: Run installer
  Script->>Script: Set VLLM_PRECOMPILED_WHEEL_LOCATION (unused)
  Script->>Pip: Install (editable or non-editable) with torch-backend

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

A rabbit hops through shells and wheels,
Snips a flag, streamlines the deals.
AMD64, a cleaner trail,
uv pip sails without a veil.
ARM looks on, unchanged and chill—
Thump, thump, commit, with tidy will. 🐇✨



coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
container/deps/vllm/install_vllm.sh (2)

153-157: Use --torch-backend only for source builds; it’s irrelevant for wheel installs

The flag is parsed by vLLM’s build system (per prior learnings) and has no effect when installing a prebuilt wheel. The proposed change above scopes the flag correctly.
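
As a rough sketch of that scoping, where the wheel-vs-source selection logic is assumed for illustration and is not the actionable comment's exact diff:

# Hypothetical scoping: pass --torch-backend only when building from source.
if [ -n "$VLLM_PRECOMPILED_WHEEL_LOCATION" ]; then
    uv pip install -e .                                    # precompiled wheel path: flag would be a no-op
else
    uv pip install -e . --torch-backend=$TORCH_BACKEND     # source build: flag selects the torch distribution
fi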


82-93: Help text defaults drifted from actual defaults — sync to avoid confusion

Several help strings don’t match the values set earlier in the script (e.g., VLLM_REF, INSTALLATION_DIR, DEEPGEMM_REF). Recommend referencing the variables directly to keep them in sync.

Apply this diff to align help text with the actual defaults:

@@
-            echo "  --vllm-ref REF    Git reference to checkout (default: f4135232b9a8c4845f8961fb1cd17581c56ae2ce)"
+            echo "  --vllm-ref REF    Git reference to checkout (default: ${VLLM_REF})"
@@
-            echo "  --installation-dir DIR  Directory to install vllm (default: /tmp/vllm)"
+            echo "  --installation-dir DIR  Directory to install vllm (default: ${INSTALLATION_DIR})"
@@
-            echo "  --deepgemm-ref REF  Git reference for DeepGEMM (default: 1876566)"
+            echo "  --deepgemm-ref REF  Git reference for DeepGEMM (default: ${DEEPGEMM_REF})"
@@
-            echo "  --torch-backend BACKEND  Torch backend to use (default: cu128)"
+            echo "  --torch-backend BACKEND  Torch backend to use (default: ${TORCH_BACKEND})"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d177cdf and 92bb00b.

📒 Files selected for processing (1)
  • container/deps/vllm/install_vllm.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: zaristei
PR: ai-dynamo/dynamo#2020
File: container/deps/vllm/install_vllm.sh:115-118
Timestamp: 2025-07-21T00:10:56.947Z
Learning: Graceful fallback for PyTorch wheel installation is broken on ARM architecture, so immediate exit on pinned version failure is preferred over fallback mechanisms in container/deps/vllm/install_vllm.sh for ARM64.
Learnt from: ptarasiewiczNV
PR: ai-dynamo/dynamo#2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The `--torch-backend=auto` flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.
📚 Learning: 2025-07-21T00:10:56.947Z
Learnt from: zaristei
PR: ai-dynamo/dynamo#2020
File: container/deps/vllm/install_vllm.sh:115-118
Timestamp: 2025-07-21T00:10:56.947Z
Learning: Graceful fallback for PyTorch wheel installation is broken on ARM architecture, so immediate exit on pinned version failure is preferred over fallback mechanisms in container/deps/vllm/install_vllm.sh for ARM64.

Applied to files:

  • container/deps/vllm/install_vllm.sh
📚 Learning: 2025-07-22T10:22:28.972Z
Learnt from: ptarasiewiczNV
PR: ai-dynamo/dynamo#2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The `--torch-backend=auto` flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

Applied to files:

  • container/deps/vllm/install_vllm.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and Test - dynamo
🔇 Additional comments (1)
container/deps/vllm/install_vllm.sh (1)

154-156: Using --torch-backend with uv pip install is valid for vLLM builds

Per previous learnings, passing this flag is correctly handled by vLLM’s build system. Keep it for source builds.

@nv-anants (Contributor)

/ok to test 92bb00b

@alec-flowers (Contributor)

/ok to test 8882917

@alec-flowers (Contributor)

/ok to test 8882917

alec-flowers merged commit 2840071 into main Aug 19, 2025
12 of 13 checks passed
alec-flowers deleted the nnshah1-pin-vllm-precompiled branch August 19, 2025 01:27
hhzhang16 pushed a commit that referenced this pull request Aug 27, 2025
Signed-off-by: Hannah Zhang <hannahz@nvidia.com>
