
Conversation

@WoosukKwon
Collaborator

No description provided.

Contributor

@gemini-code-assist (bot) left a comment

Code Review

This pull request is a large-scale cleanup, removing the V0 Sequence and Sampler implementations and related code. The changes are extensive and touch many files, including removing entire modules. The review focuses on ensuring that the removal is clean and doesn't leave behind confusing artifacts like commented-out code or placeholder classes. I've identified a few areas where the cleanup could be improved for better maintainability and to prevent potential issues.

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
@mergify mergify bot added multi-modality Related to multi-modality (#4194) v1 labels Sep 21, 2025
@WoosukKwon WoosukKwon added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 21, 2025
WoosukKwon and others added 5 commits September 20, 2025 21:55
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
@mergify mergify bot added qwen Related to Qwen models tool-calling labels Sep 21, 2025
@WoosukKwon WoosukKwon merged commit 26e673f into main Sep 21, 2025
52 of 53 checks passed
@WoosukKwon WoosukKwon deleted the woosuk/rm-v0-seq branch September 21, 2025 15:52
kingsmad pushed a commit to kingsmad/vllm that referenced this pull request Sep 22, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
kzawora-intel added a commit to vllm-project/vllm-gaudi that referenced this pull request Sep 22, 2025
Culprit commits: vllm-project/vllm#25332, vllm-project/vllm#25321, and vllm-project/vllm#25366

---------

Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Sep 22, 2025
### What this PR does / why we need it?
This PR bumps the vLLM commit hash to
vllm-project/vllm@5aeb925
and fixes the following issues:
1. vllm-project/vllm#25345 removed the V0 metadata.
2. vllm-project/vllm#25332
3. vllm-project/vllm#25334
4. vllm-project/vllm#23558: note that this vLLM commit updates the model
registration logic to check that every registered model lives under the
`vllm.model_executor.models` path, which breaks our custom registration of
the deepseek_v3 model (it does not exist under the vLLM model path). As a
temporary fix, the deepseek_v3 model registration is moved into the
deepseek_v2 module (see the registration sketch below).
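
For context, a minimal sketch of how an out-of-tree plugin such as vllm-ascend can register a custom model architecture through vLLM's `ModelRegistry.register_model` API; the module path and class name below are illustrative placeholders, not the plugin's exact names:

```python
# Minimal sketch, assuming vLLM's out-of-tree model plugin API.
# The module path and class name below are illustrative placeholders.
from vllm import ModelRegistry


def register_model():
    # Lazy "module:ClassName" registration avoids importing the model class
    # at plugin-load time. Keeping the custom DeepseekV3 class inside the
    # plugin's deepseek_v2 module works around the registration check
    # introduced by vllm-project/vllm#23558.
    ModelRegistry.register_model(
        "DeepseekV3ForCausalLM",
        "vllm_ascend.models.deepseek_v2:CustomDeepseekV3ForCausalLM",
    )
```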

### How was this patch tested?

- vLLM version: v0.10.2
- vLLM main:
vllm-project/vllm@9607d5e

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
slokesha pushed a commit to slokesha/vllm-gaudi that referenced this pull request Sep 24, 2025
Culprit commits: vllm-project/vllm#25332, vllm-project/vllm#25321, and vllm-project/vllm#25366

---------

Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: slokesha <slokeshappa@habana.ai>
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Signed-off-by: charlifu <charlifu@amd.com>
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025
### What this PR does / why we need it?
This PR bumps the vLLM commit hash to
vllm-project/vllm@5aeb925
and fixes the following issues:
1. vllm-project/vllm#25345 removed the V0 metadata.
2. vllm-project/vllm#25332
3. vllm-project/vllm#25334
4. vllm-project/vllm#23558: note that this vLLM commit updates the model
registration logic to check that every registered model lives under the
`vllm.model_executor.models` path, which breaks our custom registration of
the deepseek_v3 model (it does not exist under the vLLM model path). As a
temporary fix, the deepseek_v3 model registration is moved into the
deepseek_v2 module.

### How was this patch tested?

- vLLM version: v0.10.2
- vLLM main:
vllm-project/vllm@9607d5e

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>

Labels

multi-modality (Related to multi-modality (#4194)), qwen (Related to Qwen models), ready (ONLY add when PR is ready to merge/full CI is needed), speculative-decoding, tool-calling, v1

Projects

Status: Done

Development


2 participants