[V0 Deprecation] Remove V0 Sequence class & Sampler #25332
Conversation
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Code Review
This pull request is a large-scale cleanup that removes the V0 Sequence and Sampler implementations and related code. The changes are extensive, touching many files and deleting entire modules. The review focuses on ensuring the removal is clean and leaves behind no confusing artifacts such as commented-out code or placeholder classes. I've identified a few areas where the cleanup could be improved for maintainability and to prevent potential issues.
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Woosuk Kwon <woosuk@thinkingmachines.ai>
Culprit commit: vllm-project/vllm#25332, vllm-project/vllm#25321, and vllm-project/vllm#25366
Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
### What this PR does / why we need it?
This PR bumps the vLLM commit hash to vllm-project/vllm@5aeb925 and fixes the following issues:
1. vllm-project/vllm#25345 removed the V0 metadata
2. vllm-project/vllm#25332
3. vllm-project/vllm#25334
4. vllm-project/vllm#23558. Note that this vLLM commit updates the model registration logic to check that every registered model resolves under the `vllm.model_executor.models` path, which breaks our custom registration of the deepseek_v3 model (it doesn't exist in the vLLM model path). As a temporary fix, I moved the deepseek_v3 model registration into deepseek_v2 (see the sketch after this message).
### How was this patch tested?
- vLLM version: v0.10.2
- vLLM main: vllm-project/vllm@9607d5e
Signed-off-by: wangli <wangli858794774@gmail.com>
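For readers hitting the same breakage, out-of-tree models are typically registered through vLLM's `ModelRegistry` plugin hook. Below is a minimal sketch of such a registration; the `vllm_ascend.models.deepseek_v2` module path and `CustomDeepseekV3ForCausalLM` class name are illustrative assumptions, not the actual vllm-ascend code.

```python
# Minimal sketch of out-of-tree model registration via vLLM's ModelRegistry.
# The module path and class name below are assumptions for illustration; the
# real vllm-ascend layout may differ.
from vllm import ModelRegistry


def register_model():
    # The lazy "module:ClassName" string form defers the import until the
    # model is actually loaded. Registering DeepseekV3ForCausalLM from the
    # deepseek_v2 module mirrors the temporary workaround described above.
    ModelRegistry.register_model(
        "DeepseekV3ForCausalLM",
        "vllm_ascend.models.deepseek_v2:CustomDeepseekV3ForCausalLM",
    )
```

In a plugin package, a function like this is typically exposed through vLLM's `vllm.general_plugins` entry point so it runs at startup, before any model is resolved.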
No description provided.