feat: undo coderabbit requested changes #1813
Conversation
Walkthrough
All deployment YAML files for various LLM configurations were updated to use a fixed container image version (0.3.1) in place of the mutable latest tag.
Actionable comments posted: 1
♻️ Duplicate comments (12)
examples/vllm_v0/deploy/disagg_planner.yaml (2)
37-37: Same image-immutability concern as above. See earlier note on preferring digest-pinned images over mutable tags.
Also applies to: 65-65, 93-93, 119-119, 145-145
58-58: Repeat of the `gpu` resource-name issue. Identical concern as raised for the v1 manifest; please validate cluster support.
Also applies to: 62-62, 86-86, 90-90
examples/vllm_v1/deploy/agg.yaml (2)
37-37: Tag still mutable; recommend digest pinning. Same reasoning as previous manifests: lock the image by digest.
Also applies to: 62-62, 89-89
82-82: Non-standard `gpu` resource key. Same scheduler-compatibility concern applies here.
Also applies to: 86-86
examples/llm/deploy/agg.yaml (2)
37-37: Pin image by digest for deterministic rollouts. Applying the digest recommendation here will keep runtime behaviour consistent.
Also applies to: 62-62, 89-89
82-82: Verify `gpu` resource availability. Same comment as earlier manifests.
Also applies to: 86-86
examples/vllm_v0/deploy/agg.yaml (2)
37-37: Digest pinning suggested. Replicating the image-immutability suggestion.
Also applies to: 64-64
57-57: `gpu` resource key; double-check cluster config. Replicating the GPU resource-name concern.
Also applies to: 61-61
examples/vllm_v0/deploy/disagg.yaml (1)
54-61: Same GPU-key concern as above. The resource name `gpu` will be ignored by the default NVIDIA device plugin. See previous comment in examples/llm/deploy/disagg.yaml.
Also applies to: 80-88
examples/llm/deploy/disagg_router.yaml (1)
103-111: GPU resource key check. Mirrors the risk highlighted earlier; verify your cluster supports the custom `gpu` key.
Also applies to: 130-138
examples/llm/deploy/agg_router.yaml (1)
103-111: Confirm custom GPU resource key. Same concern regarding `gpu` vs `nvidia.com/gpu`.
examples/vllm_v1/deploy/disagg.yaml (1)
78-86: Validate non-standard GPU resource name. Ensure the cluster exposes `gpu` as an extended resource or pods will not schedule.
Also applies to: 105-113
🧹 Nitpick comments (1)
examples/vllm_v1/deploy/disagg_planner.yaml (1)
37-37: Pin the image with a digest for full immutability. Great move switching away from the `latest` tag, but relying solely on a version tag still allows the registry owner to re-push the same tag with different bits. Consider pinning to the SHA-256 digest to guarantee bit-for-bit reproducibility across environments.
- image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.3.1
+ image: nvcr.io/nvidia/ai-dynamo/vllm-runtime@sha256:<digest>
Also applies to: 63-63, 91-91, 119-119, 145-145, 171-171
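For illustration, this is roughly how a digest-pinned reference could look in a plain Kubernetes container spec. It is a generic sketch, not the exact schema of these example manifests, and the sha256 value is a placeholder that would have to be resolved from the registry.

```yaml
# Generic Kubernetes container spec, shown only to illustrate digest pinning.
containers:
  - name: vllm-runtime
    # A digest reference is immutable: re-pushing the 0.3.1 tag cannot change what this pulls.
    image: nvcr.io/nvidia/ai-dynamo/vllm-runtime@sha256:<digest>   # placeholder digest
    imagePullPolicy: IfNotPresent
```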
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- examples/llm/deploy/agg.yaml (3 hunks)
- examples/llm/deploy/agg_router.yaml (4 hunks)
- examples/llm/deploy/disagg.yaml (4 hunks)
- examples/llm/deploy/disagg_router.yaml (5 hunks)
- examples/vllm_v0/deploy/agg.yaml (2 hunks)
- examples/vllm_v0/deploy/disagg.yaml (3 hunks)
- examples/vllm_v0/deploy/disagg_planner.yaml (5 hunks)
- examples/vllm_v1/deploy/agg.yaml (3 hunks)
- examples/vllm_v1/deploy/disagg.yaml (4 hunks)
- examples/vllm_v1/deploy/disagg_planner.yaml (6 hunks)
🧰 Additional context used
🧠 Learnings (10)
examples/llm/deploy/agg.yaml, examples/llm/deploy/agg_router.yaml, examples/llm/deploy/disagg.yaml, examples/llm/deploy/disagg_router.yaml, examples/vllm_v0/deploy/agg.yaml, examples/vllm_v0/deploy/disagg.yaml, examples/vllm_v0/deploy/disagg_planner.yaml, examples/vllm_v1/deploy/agg.yaml, examples/vllm_v1/deploy/disagg.yaml, examples/vllm_v1/deploy/disagg_planner.yaml (the same 2 learnings were applied to each file)
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level `PodSecurityContext` with `runAsUser`, `runAsGroup`, and `fsGroup` all set to `1000`, and then selectively override the user at the individual container level (e.g., `RunAsUser: 0` for Kaniko) when root is required.
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
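As a reference for the first learning, here is a minimal, illustrative sketch of that security posture in a plain Kubernetes pod spec; the pod name and app image are placeholders, and the layout follows the standard Kubernetes API rather than anything the operator actually generates.

```yaml
# Sketch only: pod-level defaults with a selective per-container override.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                       # hypothetical name
spec:
  securityContext:                        # pod-level defaults
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
    - name: app
      image: registry.example.com/app:1.0 # placeholder image
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      securityContext:
        runAsUser: 0                      # override: Kaniko needs root
```

And for the second learning, a hypothetical config fragment showing why per-worker resources multiply out (the service name is made up for illustration):

```yaml
# Hypothetical ServiceArgs fragment: resources are per worker, so this is 8 x 1 = 8 GPUs total.
VllmWorker:
  ServiceArgs:
    workers: 8
    resources:
      gpu: '1'   # per individual worker, not a total
```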
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and Test - vllm
🔇 Additional comments (6)
examples/vllm_v1/deploy/disagg_planner.yaml (1)
84-84: `gpu` is not a standard extended-resource name; verify scheduler compatibility. The default Kubernetes GPU device plugin advertises `nvidia.com/gpu`; switching to a plain `gpu` resource will be ignored unless:
- the cluster nodes expose an extended resource named exactly `gpu`, or
- you are using the new `ResourceClass` API with a matching `resourceName`.
Please confirm that the cluster has been configured accordingly, otherwise pods will never be scheduled on GPU nodes (a sketch of the standard form is shown below).
Also applies to: 88-88, 112-112, 116-116
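For comparison, a minimal sketch of a container resources block using the standard name advertised by the NVIDIA device plugin. This is the generic Kubernetes schema; the example manifests in this PR may expose GPU resources differently.

```yaml
# Generic Kubernetes resources block; extended GPU resources are requested via limits.
resources:
  limits:
    nvidia.com/gpu: 1   # standard extended-resource name from the NVIDIA device plugin
```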
examples/llm/deploy/disagg.yaml (1)
37-38: Good move: pinning images to 0.3.1. Pinning removes the non-determinism of the `latest` tag and helps with rollbacks / CVE triage.
Also applies to: 62-63, 89-90, 116-117
examples/vllm_v0/deploy/disagg.yaml (1)
37-38: Image tag pinned; looks good.
Also applies to: 64-65, 91-92
examples/llm/deploy/disagg_router.yaml (1)
37-38: Pinned image tag acknowledged.
Also applies to: 62-63, 87-88, 114-115, 141-142
examples/llm/deploy/agg_router.yaml (1)
37-38: Pinned image tag OK.
Also applies to: 62-63, 87-88, 114-115
examples/vllm_v1/deploy/disagg.yaml (1)
37-38: Image pinning confirmed.
Also applies to: 62-63, 89-90, 116-117
LGTM
Overview:
CodeRabbit had requested some changes on PR #1766 that broke the deploy YAMLs. This PR fixes them.
Details:
Where should the reviewer start?
Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)