[V1][spec decode] return logprobs for spec decoding #26060
Conversation
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from 48bc380 to cc8bc92
Force-pushed from 2cf8c8b to 3969a0c
Force-pushed from 2124147 to 5797161
Thanks again @TheEpicDolphin
@TheEpicDolphin could you merge in latest main, this will be needed for the CI to pass. Edit: Nevermind, I forgot there's a button I can push for this!
Force-pushed from 881eb24 to ac3dbfa
@TheEpicDolphin it looks like the new test is failing in CI with OOM: https://buildkite.com/vllm/ci/builds/35910#019a0c97-18aa-4e85-bf17-af92608da61e
Thanks, looking into it
Force-pushed from 8fe10f3 to fa5c0da
Force-pushed from fa5c0da to 8636461
@njhill I resolved the failing tests :)
Thanks!
Purpose
Add support for returning logprobs for v1 spec decoding.
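For reference, a minimal sketch of how a caller would request these logprobs (the model names and speculative settings below are assumptions for illustration, not taken from this PR):

```python
from vllm import LLM, SamplingParams

# Assumed target/draft models, purely for illustration.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "eagle",
        "model": "yuhuili/EAGLE-LLaMA3.1-Instruct-8B",
        "num_speculative_tokens": 3,
    },
)

# logprobs=5 requests the top-5 logprobs for every generated token;
# with this PR they are returned for speculatively decoded tokens too.
params = SamplingParams(temperature=0.0, max_tokens=32, logprobs=5)
outputs = llm.generate(["The capital of France is"], params)

completion = outputs[0].outputs[0]
for token_id, logprob_dict in zip(completion.token_ids, completion.logprobs):
    print(token_id, logprob_dict[token_id].logprob)
```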
Test Plan
Automated Tests
Added an automated test for logprobs which compares the spec decode LLM's per-token output logprobs with those of a reference model. The comparison is done for `raw_logits`, `raw_logprobs`, `processed_logits`, and `processed_logprobs`. The rejection sampler still works as expected.
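The test's comparison logic isn't reproduced here; as a rough sketch of the idea (this helper is hypothetical, not the PR's test code), with each engine's output represented as a list of per-position `{token_id: logprob}` dicts:

```python
import math

def assert_logprobs_close(ref_logprobs, spec_logprobs, rel_tol=0.05):
    """Hypothetical helper: at every generated position, the logprob the
    spec decode engine reports for a token should approximately match the
    reference engine's value for that same token."""
    for ref_step, spec_step in zip(ref_logprobs, spec_logprobs, strict=True):
        for token_id, ref_lp in ref_step.items():
            assert math.isclose(
                ref_lp, spec_step[token_id], rel_tol=rel_tol, abs_tol=1e-3
            )
```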
Manual Test
Setup
Ran one server for the spec decode LLM and another for the standard decode LLM. The exact commands are elided; an illustrative sketch follows.
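A hedged sketch of an equivalent setup (model names, ports, and speculative settings are assumptions; the `--speculative-config` JSON follows vLLM's documented CLI usage):

```python
import subprocess

# Spec decode server on :8000 (assumed EAGLE draft model).
spec_server = subprocess.Popen([
    "vllm", "serve", "meta-llama/Llama-3.1-8B-Instruct",
    "--port", "8000",
    "--speculative-config",
    '{"method": "eagle", "model": "yuhuili/EAGLE-LLaMA3.1-Instruct-8B",'
    ' "num_speculative_tokens": 3}',
])

# Standard decode server on :8001: same target model, no speculation.
std_server = subprocess.Popen([
    "vllm", "serve", "meta-llama/Llama-3.1-8B-Instruct",
    "--port", "8001",
])
```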
Single Request Test
Sent a single request with logprobs enabled (the exact command is elided; an illustrative sketch follows):
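Something along these lines (port, prompt, and model name are assumptions), using the OpenAI-compatible completions API with `logprobs` enabled:

```python
from openai import OpenAI

# Point at the spec decode server from the sketch above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    prompt="The capital of France is",
    max_tokens=32,
    temperature=0.0,
    logprobs=5,  # top-5 logprobs per generated token
)
print(resp.choices[0].logprobs.token_logprobs)
```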
Verified that both the standard LLM and the spec decode LLM return identical logprobs outputs.
Multiple Request Test
Used a script to send 4 concurrent requests (the original script is elided; an illustrative sketch follows):
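A sketch of such a script (prompts, port, and model name are placeholders):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request(prompt: str):
    resp = await client.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        prompt=prompt,
        max_tokens=32,
        temperature=0.0,
        logprobs=5,
    )
    return resp.choices[0].logprobs.token_logprobs

async def main():
    # Four concurrent requests, gathered in parallel.
    prompts = ["Prompt 1", "Prompt 2", "Prompt 3", "Prompt 4"]
    results = await asyncio.gather(*(one_request(p) for p in prompts))
    for prompt, logprobs in zip(prompts, results):
        print(prompt, logprobs)

asyncio.run(main())
```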
The output logprobs for both standard and spec decode LLMs are approximately the same.
Standard decode LLM logprobs:
Spec decode LLM logprobs: