[V0 Deprecation] Remove async_output_proc, preemption mode, delay factor #25334
Conversation
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Code Review
This pull request effectively removes several deprecated V0 features, namely async_output_proc, preemption_mode, and delay_factor. The changes are comprehensive, touching configuration, argument parsing, entrypoints, executors, platform-specific code, and tests. The removal of these features simplifies the codebase and improves maintainability. The related tests have been updated or removed accordingly, ensuring that the test suite remains relevant. I've reviewed the changes and found them to be consistent and correct. I did not find any issues of high or critical severity.
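For reference, a minimal sketch of what this removal means for calling code. The keyword names `disable_async_output_proc`, `preemption_mode`, and `scheduler_delay_factor` are assumed from the PR title and the V0 `EngineArgs` naming; exact spellings are not confirmed by this thread.

```python
# Minimal sketch, assuming the V0 engine-arg names below; they are
# inferred from the PR title, not confirmed by this thread.
from vllm import LLM

# Before this PR, V0 users could pass these now-removed knobs:
#
#   llm = LLM(
#       model="facebook/opt-125m",
#       disable_async_output_proc=True,  # toggle V0 async output processing
#       preemption_mode="swap",          # V0 preemption: "recompute" or "swap"
#       scheduler_delay_factor=0.5,      # scale scheduling delay by prior prompt latency
#   )
#
# After this PR, passing any of them fails with an unexpected-keyword
# error; the V1 engine handles output processing, preemption, and
# scheduling internally, with no user-facing knobs for these behaviors.
llm = LLM(model="facebook/opt-125m")
```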
[V0 Deprecation] Remove async_output_proc, preemption mode, delay factor (vllm-project#25334) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
### What this PR does / why we need it?

This PR bumps the vLLM commit hash to vllm-project/vllm@5aeb925 and fixes issues arising from:

1. vllm-project/vllm#25345, which removed V0 metadata
2. vllm-project/vllm#25332
3. vllm-project/vllm#25334
4. vllm-project/vllm#23558 — note that this vLLM commit updates the model-registration logic to check that every registered model lives under the `vllm.model_executor.models` path, which breaks our custom registration of the deepseek_v3 model (it does not exist under the vLLM model path). As a temporary fix, the deepseek_v3 model registration is moved into deepseek_v2. A sketch of the affected registration pattern follows this description.

### How was this patch tested?

- vLLM version: v0.10.2
- vLLM main: vllm-project/vllm@9607d5e

---

Signed-off-by: wangli <wangli858794774@gmail.com>
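For illustration, a minimal sketch of the out-of-tree registration pattern that the stricter registry check can break. The module path and class name below are hypothetical stand-ins, not the actual vllm-ascend code.

```python
# Minimal sketch of out-of-tree model registration in vLLM; the module
# path and class name are hypothetical placeholders.
from vllm import ModelRegistry

# Register an architecture whose implementation lives outside
# vllm.model_executor.models. A registry check that requires in-tree
# module paths rejects entries like this one, which is why the
# deepseek_v3 registration was folded into the in-tree deepseek_v2
# module as a temporary workaround.
ModelRegistry.register_model(
    "DeepseekV3ForCausalLM",  # architecture name reported by the model config
    "my_plugin.models.deepseek_v3:CustomDeepseekV3ForCausalLM",  # hypothetical path
)
```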
[V0 Deprecation] Remove async_output_proc, preemption mode, delay factor (vllm-project#25334) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Signed-off-by: charlifu <charlifu@amd.com>
[V0 Deprecation] Remove async_output_proc, preemption mode, delay factor (#25334) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Signed-off-by: yewentao256 <zhyanwentao@126.com>
[V0 Deprecation] Remove async_output_proc, preemption mode, delay factor (vllm-project#25334) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Signed-off-by: xuebwang-amd <xuebwang@amd.com>