Conversation

@biswapanda (Contributor) commented Aug 5, 2025

Overview:

Cherry-pick: #2288 (merged to main)

Use more reasonable health probe values to avoid customer confusion (https://nvbugspro.nvidia.com/bug/5425651)

closes: https://nvbugspro.nvidia.com/bug/5425651

Summary by CodeRabbit

  • New Features

    • Added comprehensive Kubernetes deployment examples and configuration for TensorRT-LLM backend, including aggregated, disaggregated, and router-based setups.
    • Introduced new utilities for robust and distributed port allocation for vLLM backend.
  • Bug Fixes

    • Corrected and improved documentation links, command examples, and configuration paths across multiple README files.
    • Fixed system package names and clarified installation steps in setup instructions.
  • Documentation

    • Expanded and reorganized deployment and backend documentation, including detailed guides for Kubernetes deployments and backend-specific instructions.
    • Updated support matrices and framework-specific references for improved clarity.
    • Removed outdated or redundant documentation files.
  • Refactor

    • Restructured configuration files for CUDA graph, cache, and MoE settings to use nested and more descriptive keys.
    • Refactored port allocation logic in vLLM backend for better modularity and error handling.
  • Chores

    • Updated dependency versions and pinned compatible package versions for improved stability.
    • Enhanced Dockerfiles to use newer base images, add health check utilities, and update build arguments.
    • Adjusted Helm and operator configuration to use a termination delay parameter instead of a boolean flag for Grove feature management.

@copy-pr-bot bot commented Aug 5, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@biswapanda biswapanda changed the base branch from main to release/0.4.0 August 5, 2025 03:52
@biswapanda biswapanda enabled auto-merge (squash) August 5, 2025 03:52
@coderabbitai bot (Contributor) commented Aug 5, 2025

Caution: Review failed. Failed to post review comments.

Walkthrough

This update introduces substantial documentation restructuring, new deployment guides, and configuration improvements across multiple backend components. Notably, it adds Kubernetes deployment examples and guides for TRTLLM and vLLM, reorganizes and corrects support matrix and installation instructions, modularizes port allocation logic for the vLLM backend, and migrates engine configuration files to new formats. Several Dockerfiles and build scripts are updated for version consistency and improved health check tooling. The multimodal example documentation is removed, and Grove feature configuration in the operator is refactored to use a termination delay parameter with runtime detection.

Changes

Cohort / File(s) | Change Summary
Top-level & Example Documentation
README.md, examples/README.md, docs/examples/README.md, docs/guides/dynamo_deploy/README.md, docs/guides/dynamo_deploy/quickstart.md, docs/guides/dynamo_deploy/operator_deployment.md, docs/components/backends/llm/README.md, examples/basics/multimodal/README.md
Reorganized framework support matrix, clarified backend references, expanded Kubernetes deployment instructions, corrected and removed outdated documentation, deleted multimodal and operator deployment READMEs.
Backend-Specific Documentation
components/backends/vllm/README.md, components/backends/sglang/README.md, components/backends/trtllm/README.md, components/backends/llama_cpp/README.md, components/backends/sglang/docs/*, components/backends/trtllm/deploy/README.md, components/backends/vllm/deploy/README.md
Updated feature matrix links, improved installation steps, added/deleted deployment and benchmarking guides, clarified example usage, and added deployment README files.
Kubernetes Deployment YAMLs
components/backends/trtllm/deploy/*.yaml, components/backends/vllm/deploy/*.yaml, components/backends/sglang/deploy/*.yaml
Added new deployment CRs for TRTLLM, updated health probe settings for all backends, changed worker module paths, and improved readiness/liveness probe responsiveness.
Engine Configuration Migration
components/backends/trtllm/engine_configs/*
Migrated engine config files to use nested CUDA graph and cache transceiver config sections, standardized kv_cache dtype settings, and restructured MoE config keys.
vLLM Port Allocation Refactor
components/backends/vllm/src/dynamo/vllm/args.py, components/backends/vllm/src/dynamo/vllm/ports.py
Modularized and refactored port allocation logic, introduced port range abstractions, atomic block allocation, ETCD-backed reservation, and improved error handling.
Dockerfiles and Build Scripts
container/Dockerfile.sglang, container/Dockerfile.sglang-wideep, container/Dockerfile.tensorrt_llm, container/Dockerfile.vllm, container/build.sh, pyproject.toml, lib/llm/Cargo.toml
Updated NIXL and dependency versions, added jq/curl for health checks, pinned triton version for TRTLLM, improved pip install steps, and updated build script variables.
Operator & Helm: Grove Feature Refactor
deploy/cloud/operator/cmd/main.go, deploy/cloud/operator/internal/controller_common/predicate.go, deploy/cloud/operator/internal/consts/consts.go, deploy/cloud/operator/internal/controller/dynamographdeployment_controller.go, deploy/cloud/operator/internal/dynamo/graph.go, deploy/cloud/operator/internal/dynamo/graph_test.go, deploy/cloud/helm/platform/components/operator/templates/deployment.yaml, deploy/cloud/helm/platform/components/operator/values.yaml, deploy/cloud/helm/platform/values.yaml, deploy/cloud/helm/platform/components/operator/templates/manager-rbac.yaml, deploy/cloud/helm/dynamo-platform-values.yaml, deploy/cloud/helm/deploy.sh
Replaced Grove enable flag with termination delay parameter, added runtime Grove API detection, updated Helm and operator config, and refactored RBAC and deployment templates accordingly.
Miscellaneous Documentation Fixes
components/README.md, benchmarks/llm/README.md, deploy/cloud/README.md, deploy/inference-gateway/README.md, deploy/metrics/README.md, examples/basics/quickstart/README.md, examples/basics/multinode/README.md, docs/API/nixl_connect/connector.md, docs/architecture/dynamo_flow.md, docs/runtime/README.md, docs/guides/dynamo_run.md
Corrected relative links, updated references, and cleaned up documentation across various files.
SGLang and TRTLLM Worker Command Updates
components/backends/sglang/launch/agg_router.sh, components/backends/sglang/slurm_jobs/scripts/worker_setup.py, components/backends/sglang/slurm_jobs/README.md
Updated worker startup commands to use explicit python module invocation and adjusted batch size and model flags for clarity and correctness.
TRTLLM Backend Logic
components/backends/trtllm/src/dynamo/trtllm/main.py
Fixed dictionary key handling for kv_cache_config to ensure proper default assignment.
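The kv_cache_config fix noted in the last row follows a common pattern: assign a default only when the key is genuinely absent, so user-provided values are never clobbered. A minimal, hypothetical Python sketch (function and key names are illustrative, not the actual TRTLLM code):

```python
def apply_kv_cache_defaults(engine_config: dict) -> dict:
    """Ensure kv_cache_config exists and carries a default dtype,
    without overwriting values the user already set."""
    kv_cfg = engine_config.setdefault("kv_cache_config", {})
    # setdefault only assigns when the key is missing.
    kv_cfg.setdefault("dtype", "auto")
    return engine_config
```

With this shape, `apply_kv_cache_defaults({"kv_cache_config": {"dtype": "fp8"}})` keeps the user's `fp8`, while an empty config gains the `auto` default.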

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant EtcdClient
    participant PortsModule
    participant vLLMBackend

    User->>vLLMBackend: Start backend with CLI args (--dynamo-port-min/max)
    vLLMBackend->>PortsModule: Request port allocation block (tp_size)
    PortsModule->>PortsModule: Bind and check port block availability
    PortsModule->>EtcdClient: Reserve port block in ETCD with metadata
    PortsModule-->>vLLMBackend: Return allocated port block
    vLLMBackend->>vLLMBackend: Configure side channel and KV ports
    vLLMBackend-->>User: Backend ready with reserved ports
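The "Bind and check port block availability" step in the diagram above can be sketched roughly as follows. This is an illustrative approximation of atomic block allocation, not the actual `ports.py` implementation: all ports in a candidate block are bound at once, so a block is only returned when every port in it is free.

```python
import socket
from contextlib import ExitStack

def find_free_port_block(start: int, end: int, block_size: int) -> int:
    """Scan [start, end) for `block_size` consecutive free TCP ports.

    Binding every port in the candidate block simultaneously makes the
    check atomic with respect to other local processes; the sockets are
    released once the block is confirmed free.
    """
    for base in range(start, end - block_size + 1):
        with ExitStack() as stack:
            try:
                for port in range(base, base + block_size):
                    s = stack.enter_context(
                        socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    )
                    s.bind(("127.0.0.1", port))
                return base  # every port in the block bound successfully
            except OSError:
                continue  # a port in this block is taken; try the next base
    raise RuntimeError(f"no free block of {block_size} ports in [{start}, {end})")
```

In the PR's design, a block found this way would then be recorded in ETCD with metadata so other workers skip it; that reservation step is not shown here.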
sequenceDiagram
    participant Operator
    participant K8sAPI
    participant GroveAPI

    Operator->>K8sAPI: Discover API groups
    K8sAPI-->>Operator: Return API groups
    Operator->>Operator: Detect Grove availability
    Operator->>Operator: Set Grove.Enabled and TerminationDelay
    Operator->>K8sAPI: Deploy resources with Grove config (if available)
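The "Detect Grove availability" step above amounts to scanning the cluster's discovered API groups for the Grove group. The real operator is written in Go; this is a hedged Python sketch, and the group name `grove.io` is assumed for illustration:

```python
from dataclasses import dataclass

GROVE_API_GROUP = "grove.io"  # assumed group name, for illustration only

@dataclass
class GroveConfig:
    enabled: bool
    termination_delay_seconds: int

def detect_grove(api_groups: list[str],
                 termination_delay_seconds: int = 30) -> GroveConfig:
    """Enable Grove handling only when its API group was discovered on
    the cluster; the termination delay parameter replaces the old
    boolean enable flag described in the walkthrough."""
    return GroveConfig(
        enabled=GROVE_API_GROUP in api_groups,
        termination_delay_seconds=termination_delay_seconds,
    )
```

This mirrors the refactor's intent: availability is detected at runtime rather than toggled by a Helm flag, and the delay value is the only piece of configuration that remains.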

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • ai-dynamo/dynamo#2175: Introduces the new ports module for atomic port allocation and ETCD-backed reservation, which is now integrated into vLLM backend logic.
  • ai-dynamo/dynamo#2217: Migrates TRTLLM engine configuration files to use nested CUDA graph and cache transceiver config, directly related to the current config file changes.
  • ai-dynamo/dynamo#2190: Refactors Grove feature configuration in the operator and Helm charts, replacing the enable flag with a termination delay and runtime detection, matching operator and deployment changes here.

Poem

A rabbit hops through docs and YAML fields,
Polishing links and what each backend yields.
Ports are reserved, configs now neat,
With health checks and guides, the docs are complete.
Grove waits with patience, its delay set anew—
Kubernetes and code, all fresh as morning dew!
🐇✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.2.2)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions




@mohammedabdulwahhab (Contributor) left a comment

@biswapanda @atchernych I noticed that this was already merged to main, but will this work? These values don't seem reasonable to me, as they give the worker only 30 seconds to fully bootstrap.
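For context on where a "30 seconds to bootstrap" figure comes from: under a Kubernetes startup or readiness probe, the window a pod gets before being marked failed is roughly initialDelaySeconds + failureThreshold × periodSeconds. A hypothetical probe illustrating such a budget (these are not the exact values from this PR):

```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 4   # ~10 + 4 x 5 = 30 s before the pod is marked unready
```

A worker that loads large model weights can easily exceed such a window, which is the concern being raised here.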

@biswapanda (Contributor, Author) commented

This is not a release blocker for 0.4.0; closing the PR.

@biswapanda biswapanda closed this Aug 5, 2025
auto-merge was automatically disabled August 5, 2025 06:05

Pull request was closed

