
Conversation

@celestialli
Contributor

@celestialli celestialli commented Apr 28, 2025

What this PR does / why we need it?

Fixes the CPU memory leak issue described in vllm-project/vllm#16472.
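The fix this PR applies can be sketched as follows. This is a simplified, hypothetical illustration of the patch pattern (wrapping `from_seq_group` so that finished sequences are removed from the `seq_id_to_seq_group` mapping after output construction); the class and function names below are stand-ins, not vLLM's actual API.

```python
# Sketch of the patch described in this PR: after building an output from a
# sequence group, drop finished sequence ids from the tracking dict so the
# mapping no longer grows without bound (the CPU memory leak).
# All names here are simplified stand-ins for illustration only.

class FakeSeqGroup:
    """Minimal stand-in for a sequence group with a finished flag."""

    def __init__(self, seq_id, finished):
        self.seq_id = seq_id
        self._finished = finished

    def is_finished(self):
        return self._finished


def original_from_seq_group(seq_group, seq_id_to_seq_group):
    # Stand-in for the upstream helper: builds an output but never cleans up
    # the mapping, so entries for finished sequences accumulate over time.
    return {"seq_id": seq_group.seq_id}


def patched_from_seq_group(seq_group, seq_id_to_seq_group):
    # The patch: delegate to the original, then remove finished sequences
    # from the tracking dict so their memory can be reclaimed.
    output = original_from_seq_group(seq_group, seq_id_to_seq_group)
    if seq_group.is_finished():
        seq_id_to_seq_group.pop(seq_group.seq_id, None)
    return output


if __name__ == "__main__":
    tracking = {1: FakeSeqGroup(1, finished=True),
                2: FakeSeqGroup(2, finished=False)}
    patched_from_seq_group(tracking[1], tracking)
    patched_from_seq_group(tracking[2], tracking)
    print(sorted(tracking))  # prints [2]: finished seq 1 removed, seq 2 kept
```

Because the wrapper replaces the original function at import time (a global patch), applying it more than once would be redundant, which is the concern raised in the discussion below.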

Does this PR introduce any user-facing change?

No

How was this patch tested?

With CI

@celestialli celestialli changed the title patch from_seq_group to clear finished seq in seq_id_to_seq_group [0.7.3] patch from_seq_group to clear finished seq in seq_id_to_seq_group Apr 28, 2025
Signed-off-by: Shuqiao Li <celestialli@outlook.com>
@wangxiyuan
Collaborator

add the patch file to patch/__init__.py as well

@celestialli
Contributor Author

> add the patch file to patch/__init__.py as well

Then it will be patched more than once; is that what we are expecting?

@wangxiyuan
Collaborator

> add the patch file to patch/__init__.py as well
>
> Then it will be patched more than once; is that what we are expecting?

Oh, never mind, I just realized this global patch is for 0.7.3.

@celestialli
Contributor Author

> add the patch file to patch/__init__.py as well
>
> Then it will be patched more than once; is that what we are expecting?
>
> Oh, never mind, I just realized this global patch is for 0.7.3.

Gotcha, this PR is good to merge.

Collaborator

@wangxiyuan wangxiyuan left a comment


@ganyi1996ppo
Copy link
Collaborator

LGTM, I'll merge this

@ganyi1996ppo ganyi1996ppo merged commit e1d13fc into vllm-project:v0.7.3-dev Apr 28, 2025
11 checks passed


3 participants