@Sugar-zsg Sugar-zsg commented Oct 15, 2025

This issue was discovered while testing a previous PR (#25208).

When running inference with the Whisper model with CUDAGraphMode=FULL_DECODE_ONLY, I observed the following behavior:

This prompt works correctly and uses CUDA Graph:

{
    "prompt": "<|startoftranscript|><|zh|><|transcribe|><|notimestamps|>",
    "multi_modal_data": {
        "audio": (audio_waveform, None)
    }
}

This prompt fails to reuse encoder results (the first decoder step switches to FULL mode):

{
    "prompt": "<|startoftranscript|>",
    "multi_modal_data": {
        "audio": (audio_waveform, None)
    }
}

This PR fixes an issue where, when using CUDA Graph, a prompt containing only a single token causes uniform_decode=True during the prefill phase, which prevents the decoder from using the encoder outputs.
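The misclassification can be illustrated with a small, self-contained sketch. The function name, argument names, and data shapes below are simplified stand-ins for vLLM's scheduler state, not the actual implementation:

```python
# Hedged sketch: how a single-token prompt can be misclassified as a
# "uniform decode" batch. A batch is eligible for a FULL_DECODE_ONLY
# CUDA graph roughly when every scheduled request contributes the same,
# small number of query tokens. A one-token *prefill* looks identical to
# a one-token decode step by that criterion alone, so the
# encoder-decoder prefill path is skipped.

def is_uniform_decode(num_scheduled_tokens: list[int],
                      num_computed_tokens: list[int]) -> bool:
    """Simplified stand-in for the batch classification logic."""
    # Naive check: every request schedules exactly one token.
    uniform = all(n == 1 for n in num_scheduled_tokens)
    # Conceptual fix from this PR: a request with no computed tokens yet
    # is still in prefill, so the batch is not a pure decode batch.
    has_prefill = any(c == 0 for c in num_computed_tokens)
    return uniform and not has_prefill

# Prompt "<|startoftranscript|>" schedules 1 token with 0 computed:
# without the prefill check this would wrongly return True.
print(is_uniform_decode([1], [0]))   # False with the fix applied
```

With the longer prompt (`<|startoftranscript|><|zh|><|transcribe|><|notimestamps|>`), the prefill step schedules several tokens at once, so the naive all-ones check already fails and the problem never surfaces.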




@gemini-code-assist gemini-code-assist bot left a comment

Code Review
Code Review

This pull request aims to fix an issue where uniform_decode is incorrectly enabled for single-token prompts in CUDA graph mode, which can prevent the use of encoder outputs. The approach is to add a helper function _has_prefill_tokens_scheduled to detect if any request is still in the prefill phase and disable uniform_decode accordingly.

My review found a critical issue in the implementation. The new helper function is called with an incorrect argument, which makes the fix ineffective. I've provided a detailed comment and a code suggestion to resolve this bug. Once fixed, the change should correctly address the described problem.

Sugar-zsg and others added 4 commits October 15, 2025 16:08
Signed-off-by: Sugar-zsg <952242923@qq.com>
@Sugar-zsg
Contributor Author

Could you please review this PR when you have time? Thanks. @russellb

@Sugar-zsg
Contributor Author

When using the second configuration (single-token prompt with CUDAGraphMode=FULL_DECODE_ONLY):

Before:

transcription result: Thank you.
transcription result: Thank you.


With this PR:

transcription result: The first words I spoke in the original phonograph a little piece of practical poetry. Mary had her little lamb it sleet was white as snow and everywhere that Mary went, the Lamb would sure to go!

transcription result: And the old one pitch on the way to Edgar Martinez swung on the line down the left field line for a base hit. Here comes Joy. Here is Junior to third base. They're going to wave him in. The throw to the plate will be late. The Mariners are going to play for the American League Championship. I don't believe it. It just continues. My oh my.
