[WIP][v1] Support for returning a value when using wait_for_save #21698
Conversation
Code Review
This pull request adds a return value to wait_for_save so that workers can notify the scheduler about successfully dumped blocks. The changes are logical overall, but I've identified a critical bug in the scheduler: the new logic sits outside the loop it depends on, which would lead to incorrect behavior. I've also suggested adding type hints in a few places to improve code clarity and maintainability.
vllm/v1/core/sched/scheduler.py
Outdated
This block of code appears to be outside the `for request in self.running:` loop, yet it uses the loop variables `request` and `req_id`. As written, the logic would only apply to the last request processed by the loop, which is likely a bug. The block should be moved inside the for loop so it executes for every running request.
Additionally, the indentation of the second line of this block is inconsistent.
```diff
-if model_runner_output.finished_dumping is not None:
-request.succeed_dumped_blocks.extend(model_runner_output.finished_dumping.get(req_id, []))
+if model_runner_output.finished_dumping is not None:
+    request.succeed_dumped_blocks.extend(model_runner_output.finished_dumping.get(req_id, []))
```
vllm/v1/request.py
Outdated
vllm/v1/worker/gpu_model_runner.py
Outdated
The return type hint for `maybe_wait_for_kv_save` was removed when the function was changed to return a value. To maintain code clarity and type safety, please add the appropriate return type hint. Based on the type of `finished_dumping` in `ModelRunnerOutput`, it should be `Optional[dict[str, list[str]]]`.
```diff
-def maybe_wait_for_kv_save():
+def maybe_wait_for_kv_save() -> Optional[dict[str, list[str]]]:
```
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Signed-off-by: flesher0813 <1208954694@qq.com>
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.

Purpose
Current connectors do not specify whether wait_for_save returns a value.
This PR adds an explicit return value so that workers can inform the scheduler whether a request's blocks were dumped successfully.
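A minimal sketch of the intended contract, under the assumptions stated here: `wait_for_save` blocks until pending dumps complete and then returns a map from request ID to the block IDs that were dumped successfully (or `None` when there is nothing to report). The class below is a simplified illustration, not vLLM's actual connector base class.

```python
from typing import Optional


class DummyConnector:
    """Simplified stand-in for a KV connector implementing the new contract."""

    def __init__(self):
        # Pretend these dumps were kicked off asynchronously earlier.
        self._pending: dict[str, list[str]] = {"req-1": ["block-a", "block-b"]}

    def wait_for_save(self) -> Optional[dict[str, list[str]]]:
        # Wait for outstanding dumps (trivially done here), then report
        # which blocks finished per request; None means nothing to report.
        done, self._pending = self._pending, {}
        return done or None


connector = DummyConnector()
finished_dumping = connector.wait_for_save()
```

The worker can then forward `finished_dumping` to the scheduler, which extends each request's record of successfully dumped blocks.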
Test Plan
Test with test_multi_connector.py
Test Result
TODO
(Optional) Documentation Update