[BugFix] Fix handling of num_computed_tokens with connector #18232
Conversation
vllm-project#18001 changed the behaviour subtly and broke some multi-connector cases. This change ensures we don't call the connector get_num_new_matched_tokens method a second time for a given request after an async load has completed. Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
LGTM. Sorry for not catching this bug in my review.
```python
# This information is used to determine if a load is
# needed for this request.
if self.connector is not None:
    if num_external_computed_tokens:
```
I don't think this is right. We should let the connector decide what to do if num_external_computed_tokens=0.
For instance, this will cause a memory leak on the P worker if the D worker has a full prefix cache hit.
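To illustrate the concern, here is a minimal toy sketch (the ToyPrefillConnector class and pin_blocks method are hypothetical, not the real vLLM connector API) of why a prefill-side connector may still need the post-allocation callback even when num_external_computed_tokens is 0:

```python
# Toy sketch: a prefill-side (P worker) connector that pins KV blocks until
# the decode side either pulls them or declines them. If the scheduler skips
# the callback whenever num_external_tokens == 0 (e.g. the D worker had a
# full prefix-cache hit), the pinned blocks are never released.

class ToyPrefillConnector:
    def __init__(self):
        # Requests whose KV blocks are held for the decode worker.
        self._pinned_requests: set[str] = set()

    def pin_blocks(self, request_id: str) -> None:
        self._pinned_requests.add(request_id)

    def update_state_after_alloc(self, request_id: str,
                                 num_external_tokens: int) -> None:
        # Even when num_external_tokens == 0, this call is the connector's
        # chance to release its pinned blocks; never invoking it in the
        # zero-token case is the memory leak described above.
        self._pinned_requests.discard(request_id)


connector = ToyPrefillConnector()
connector.pin_blocks("req-1")
connector.update_state_after_alloc("req-1", num_external_tokens=0)
assert not connector._pinned_requests  # blocks released, no leak
```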
Thanks @robertgshaw2-redhat, I considered this and thought it was OK, but I got P/D mixed up and you're right.
It will mean we need to rethink some things w.r.t. the multi-connector impl though, since we currently cycle through the connectors in get_num_new_matched_tokens until the first one that returns nonzero (see the sketch below).
I think this could be addressed by handling that case for nixl in get_num_new_matched_tokens itself; I'll add a change for that.
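For context, a minimal sketch of the cycling behaviour described above (toy callables stand in for connectors; this is not the real MultiConnector code):

```python
from typing import Callable

# Each "connector" here is just a function returning
# (num_new_matched_tokens, load_kv_async).
Connector = Callable[[str], tuple[int, bool]]

def get_num_new_matched_tokens(connectors: list[Connector],
                               request_id: str) -> tuple[int, bool]:
    # Ask each child connector in turn; stop at the first one that
    # reports matched tokens.
    for connector in connectors:
        num_tokens, load_async = connector(request_id)
        if num_tokens > 0:
            return num_tokens, load_async
    return 0, False

# A second scheduler pass that re-invoked this method after an async load
# completed would fall through the first connector (which now reports 0)
# and accidentally trigger the next one -- the bug this PR avoids by not
# calling the method again once the load is done.
nixl = lambda rid: (0, False)      # already loaded, now reports 0
lmcache = lambda rid: (128, True)  # would be triggered unintentionally
print(get_num_new_matched_tokens([nixl, lmcache], "req-1"))  # (128, True)
```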
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>
LGTM! Sorry for the bug in my PR.
Do we need to add a comment in KVConnectorBase_V1.get_num_new_matched_tokens to document the constraint that connectors should only return load_kv_async=True when num_external_computed_tokens > 0, to help people implement new connectors?
Thanks @heheda12345, yes that's a good idea re making that clear in the comment. Really the value of load_kv_async only has meaning when num_external_computed_tokens > 0.
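As a hedged sketch, the documented constraint might look something like this (the signature is simplified here; the real KVConnectorBase_V1 method takes additional request arguments):

```python
from abc import ABC, abstractmethod

class KVConnectorSketch(ABC):
    """Simplified stand-in for KVConnectorBase_V1, for illustration only."""

    @abstractmethod
    def get_num_new_matched_tokens(
            self, request_id: str,
            num_computed_tokens: int) -> tuple[int, bool]:
        """Return (num_external_computed_tokens, load_kv_async).

        Constraint: load_kv_async may only be True when
        num_external_computed_tokens > 0. Returning True tells the
        scheduler to hold the request until the external load finishes,
        which only makes sense if the connector actually has tokens
        to load.
        """
        raise NotImplementedError
```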
[BugFix] Fix handling of num_computed_tokens with connector (vllm-project#18232)
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
Some additional explanation:
For async loading there are effectively two passes through the scheduler for the request. The first time, the connector methods are called and it goes into the WAITING_FOR_REMOTE_KVS state. Once the async load is ready, it goes through the "waiting" part of the schedule() method again for that request. Previously the connector methods were called again when this happened, even though we don't want to do anything more since the computation is now done.
This originally wasn't causing a problem because the nixl connector unsets the flag in the transfer params so that it's ignored the second time.
But in the multi-connector, when get_num_new_matched_tokens returns 0 from one of the connectors it moves on to the next one... so in this second pass we were actually triggering the LMCache connector.
So this change makes things more explicit/robust in the scheduler w.r.t. when the connectors are invoked. Similarly, it also only calls update_state_after_alloc if get_num_new_matched_tokens returned nonzero, since in terms of the API contract it doesn't really make sense to invoke the connector after the allocation if it's saying that it is not providing any tokens.

cc @robertgshaw2-redhat @heheda12345 @WoosukKwon
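To make the two-pass flow concrete, here is a minimal sketch under stated assumptions (toy dict-based request state and a hypothetical schedule_request helper; not the actual vLLM scheduler code):

```python
# Toy scheduler states.
WAITING = "WAITING"
WAITING_FOR_REMOTE_KVS = "WAITING_FOR_REMOTE_KVS"
READY = "READY"

def schedule_request(request: dict, connector) -> None:
    if request["status"] == WAITING_FOR_REMOTE_KVS:
        # Second pass: the async load has finished and the externally
        # computed tokens were already counted, so do NOT query the
        # connector again -- the core fix in this PR.
        request["status"] = READY
        return

    # First pass: ask the connector exactly once.
    num_external, load_async = connector.get_num_new_matched_tokens(
        request["id"])
    if num_external > 0:
        request["num_computed_tokens"] += num_external
        # Only notify the connector of the allocation if it is actually
        # providing tokens (the update_state_after_alloc change above).
        connector.update_state_after_alloc(request["id"], num_external)
    request["status"] = WAITING_FOR_REMOTE_KVS if load_async else READY


class DemoConnector:
    """Toy connector that always reports 64 async-loadable tokens."""

    def get_num_new_matched_tokens(self, request_id):
        return 64, True

    def update_state_after_alloc(self, request_id, num_external):
        pass


req = {"id": "req-1", "status": WAITING, "num_computed_tokens": 0}
schedule_request(req, DemoConnector())  # pass 1 -> WAITING_FOR_REMOTE_KVS
schedule_request(req, DemoConnector())  # pass 2 -> READY, connector not re-queried
```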