
Conversation


@SageMoore SageMoore commented Sep 26, 2025

Purpose

This PR reduces the memory footprint of cudagraphs when running with DBO (dual batch overlap) by constructing non-DBO cudagraphs only for the shapes that DBO doesn't support, rather than capturing both a DBO and a non-DBO graph for every shape.
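A minimal sketch of the capture-time selection described above; the names (`dbo_supported`, `capture_dbo_graph`, `capture_plain_graph`, `dbo_decode_token_threshold`) are illustrative assumptions, not vLLM's actual internals:

```python
# Hypothetical sketch: capture exactly one graph per shape instead of two.

def dbo_supported(num_tokens: int, dbo_decode_token_threshold: int = 32) -> bool:
    # Assumption: DBO only applies once a batch is large enough to split
    # into two useful microbatches.
    return num_tokens >= dbo_decode_token_threshold

def capture_graphs(capture_sizes, capture_dbo_graph, capture_plain_graph):
    """Capture the DBO graph for shapes DBO supports, otherwise the plain graph."""
    graphs = {}
    for num_tokens in sorted(capture_sizes):
        if dbo_supported(num_tokens):
            # Previously a non-DBO graph was also captured for these shapes;
            # keeping only the DBO graph is what saves the memory.
            graphs[num_tokens] = capture_dbo_graph(num_tokens)
        else:
            graphs[num_tokens] = capture_plain_graph(num_tokens)
    return graphs
```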

Test Plan

lm_eval deepseek-ai/DeepSeek-V2-Lite

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.3633|±  |0.0278|
|     |       |strict-match    |     5|exact_match|↑  |0.3567|±  |0.0277|

Test Result

Capture time and memory before this change

(EngineCore_DP4 pid=82745) INFO 09-26 08:51:47 [gpu_model_runner.py:3443] Graph capturing finished in 75 secs, took -4.58 GiB
(EngineCore_DP14 pid=80459) INFO 09-26 08:51:47 [gpu_model_runner.py:3443] Graph capturing finished in 75 secs, took -4.57 GiB

Capture time and memory after this change

(EngineCore_DP0 pid=675149) INFO 09-26 18:32:37 [v1/worker/gpu_model_runner.py:3443] Graph capturing finished in 28 secs, took 3.43 GiB
(EngineCore_DP1 pid=675150) INFO 09-26 18:32:37 [v1/worker/gpu_model_runner.py:3443] Graph capturing finished in 28 secs, took 3.43 GiB

The footprint can be reduced further by running with full cudagraphs only:

(EngineCore_DP1 pid=3384884) INFO 09-26 16:37:22 [gpu_model_runner.py:3375] Graph capturing finished in 18 secs, took 1.99 GiB
(EngineCore_DP0 pid=3384883) INFO 09-26 16:37:22 [gpu_model_runner.py:3375] Graph capturing finished in 18 secs, took 1.99 GiB

This is roughly the same footprint as a non-DBO run with both styles of cudagraphs enabled:

(EngineCore_DP0 pid=3307545) INFO 09-26 16:24:57 [gpu_model_runner.py:3375] Graph capturing finished in 18 secs, took 2.12 GiB
(EngineCore_DP1 pid=3307546) INFO 09-26 16:24:57 [gpu_model_runner.py:3375] Graph capturing finished in 18 secs, took 2.12 GiB
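For reference, a hedged sketch of how full-cudagraph-only capture might be requested when constructing the engine; the `compilation_config` keys shown (in particular `cudagraph_mode` and the `"FULL"` value) are assumptions about the config surface, so check the vLLM docs for the exact field names in your version:

```python
# Assumption-heavy sketch: restrict capture to full cudagraphs only.
from vllm import LLM

llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",
    # "cudagraph_mode" is assumed to be the relevant CompilationConfig field.
    compilation_config={"cudagraph_mode": "FULL"},
)
```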

Signed-off-by: Sage Moore <sage@neuralmagic.com>

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an effective optimization to reduce the memory footprint of CUDA graphs when using dual batch overlap (DBO). The main change avoids capturing both microbatched and non-microbatched graphs for the same shape, capturing only the appropriate graph type instead, which directly lowers memory usage. A corresponding change correctly handles runtime scenarios where microbatching is aborted by falling back to eager execution, preventing graph mismatches. The implementation is clean and logical, and I have not identified any issues of high or critical severity.
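To illustrate the fallback the review mentions, a hedged sketch (hypothetical names, not vLLM's dispatch code) of choosing between replaying a captured graph and running eagerly when microbatching is aborted at runtime:

```python
# Hypothetical dispatch sketch; graph objects are assumed to expose replay(),
# and run_eager stands in for the non-captured execution path.

def execute(num_tokens, use_dbo, dbo_graphs, plain_graphs, run_eager):
    if use_dbo and num_tokens in dbo_graphs:
        return dbo_graphs[num_tokens].replay()
    if not use_dbo and num_tokens in plain_graphs:
        return plain_graphs[num_tokens].replay()
    # Microbatching was aborted (or no graph was captured for this mode and
    # shape): run eagerly rather than replay a graph captured under the
    # other mode.
    return run_eager(num_tokens)
```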

@tlrmchlsmth tlrmchlsmth added this to the v0.11.0 Cherry Picks milestone Sep 26, 2025
@tlrmchlsmth tlrmchlsmth added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 26, 2025
@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) September 26, 2025 21:11
@tlrmchlsmth tlrmchlsmth merged commit 4778b42 into vllm-project:main Sep 26, 2025
42 checks passed
simon-mo pushed a commit that referenced this pull request Sep 28, 2025
pdasigi pushed a commit to pdasigi/vllm that referenced this pull request Oct 2, 2025
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
shyeh25 pushed a commit to shyeh25/vllm that referenced this pull request Oct 14, 2025
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request Oct 24, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025