
Conversation

@njhill njhill commented May 15, 2025

#18001 changed the behaviour subtly and broke some multi-connector cases.

This change ensures we don't call the connector get_num_new_matched_tokens method a second time for a given request after an async load has completed.

Some additional explanation:

For async loading there are effectively two passes through the scheduler for the request. On the first pass, the connector methods are called and the request goes into WAITING_FOR_REMOTE_KVS state.

Once the async load is ready, the request goes through the "waiting" part of the schedule() method again... previously the connector methods were being called again at that point, even though we don't want to do anything more since the computation is now done.
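In miniature, the two passes can be sketched like this (the class and function names are illustrative stand-ins, not the real vLLM scheduler API):

```python
from enum import Enum, auto

class RequestStatus(Enum):
    WAITING = auto()
    WAITING_FOR_REMOTE_KVS = auto()  # async KV load in flight

class CountingConnector:
    """Stub connector that records how often it is queried."""
    def __init__(self):
        self.calls = 0

    def get_num_new_matched_tokens(self, request):
        self.calls += 1
        return 128, True  # pretend 128 tokens will load asynchronously

class Request:
    def __init__(self):
        self.status = RequestStatus.WAITING
        self.num_external_tokens = 0

def schedule_waiting(connector, request):
    """Toy version of the 'waiting' part of schedule() described above."""
    if request.status == RequestStatus.WAITING_FOR_REMOTE_KVS:
        # Second pass: the async load has completed, so skip the
        # connector entirely -- its work for this request is done.
        request.status = RequestStatus.WAITING
        return request.num_external_tokens

    # First pass: ask the connector how many tokens it can provide.
    num_external, load_async = connector.get_num_new_matched_tokens(request)
    request.num_external_tokens = num_external
    if load_async:
        request.status = RequestStatus.WAITING_FOR_REMOTE_KVS
    return num_external
```

With this guard, two scheduling passes still query the connector only once.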

This originally wasn't causing a problem because the nixl connector unsets the flag in the transfer params so that it's ignored the second time.

But with the multi-connector, when get_num_new_matched_tokens returns 0 from one of the connectors it moves on to the next one... so on this second pass we were actually triggering the LMCache connector.
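The fall-through behaviour can be seen with stub connectors (an illustrative sketch, not the real MultiConnector code):

```python
class StubConnector:
    """Stub standing in for a real connector (e.g. nixl or LMCache)."""
    def __init__(self, name, num_tokens):
        self.name = name
        self.num_tokens = num_tokens
        self.calls = 0

    def get_num_new_matched_tokens(self, request):
        self.calls += 1
        return self.num_tokens, self.num_tokens > 0

class MultiConnector:
    """Toy sketch of the lookup order described above."""
    def __init__(self, connectors):
        self.connectors = connectors

    def get_num_new_matched_tokens(self, request):
        # Try connectors in order; the first nonzero match wins. A
        # second scheduler pass that reaches this method again can
        # therefore fall through to a later connector.
        for c in self.connectors:
            num_tokens, load_async = c.get_num_new_matched_tokens(request)
            if num_tokens > 0:
                return num_tokens, load_async
        return 0, False
```

If the first connector now reports 0 (e.g. its transfer-params flag was already consumed), the request falls through to the second connector.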

So this change makes the scheduler more explicit/robust w.r.t. when the connectors are invoked. Similarly, it only calls update_state_after_alloc if get_num_new_matched_tokens returned nonzero, since in terms of the API contract it doesn't really make sense to invoke the connector after the allocation when it has said that it is not providing any tokens.

cc @robertgshaw2-redhat @heheda12345 @WoosukKwon


Signed-off-by: Nick Hill <nhill@redhat.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, only fastcheck CI will run, which starts a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@njhill njhill added the bug Something isn't working label May 15, 2025
@mergify mergify bot added the v1 label May 15, 2025
@njhill njhill requested a review from heheda12345 May 15, 2025 23:27
@njhill njhill added the ready ONLY add when PR is ready to merge/full CI is needed label May 15, 2025
Signed-off-by: Nick Hill <nhill@redhat.com>

@WoosukKwon WoosukKwon left a comment


LGTM. Sorry for not catching this bug in my review.

```python
# This information is used to determine if a load is
# needed for this request.
if self.connector is not None:
    if num_external_computed_tokens:
```
Collaborator


I don't think this is right. We should let the connector decide what to do if num_external_computed_tokens=0.

For instance, this will cause a memory leak on the P worker if the D worker has a full prefix cache hit.
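One way to picture that concern, with entirely made-up names: if the prefill (P) worker only frees the KV blocks staged for a request once the connector hears about the decode side's allocation, then skipping that notification when num_external_computed_tokens == 0 leaves the blocks held forever:

```python
class PrefillWorkerBlocks:
    """Made-up model of the P worker's staged KV blocks."""
    def __init__(self):
        self.held = {}  # request_id -> staged KV blocks

    def stage(self, request_id, blocks):
        self.held[request_id] = blocks

    def update_state_after_alloc(self, request_id):
        # Release the staged blocks once the decode side's
        # allocation is known.
        self.held.pop(request_id, None)

worker = PrefillWorkerBlocks()
worker.stage("req-1", ["blk0", "blk1"])

# D worker had a full prefix cache hit, so nothing new to transfer:
num_external_computed_tokens = 0
if num_external_computed_tokens:  # the guard discussed in this thread
    worker.update_state_after_alloc("req-1")

# The connector was never told about the request, so the staged
# blocks are never freed: this is the leak.
assert worker.held == {"req-1": ["blk0", "blk1"]}
```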

Member Author


Thanks @robertgshaw2-redhat, I considered this and thought it was OK, but I got P/D mixed up and you're right.

It will mean we need to rethink some things w.r.t. the multi-connector impl though since we currently cycle through the connectors in get_num_new_matched_tokens until the first one that returns nonzero.

I think this could be addressed by handling that case for nixl in get_num_new_matched_tokens itself, I'll add a change for that.
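A hypothetical sketch of that idea, with invented helper structures: the zero-match case is recorded inside get_num_new_matched_tokens itself, so no later scheduler callback is needed:

```python
class NixlLikeConnector:
    """Invented stand-in; not the real nixl connector code."""
    def __init__(self, remote_hits):
        self.remote_hits = remote_hits      # request_id -> token count
        self.zero_match_requests = set()    # need remote-side cleanup

    def get_num_new_matched_tokens(self, request_id):
        count = self.remote_hits.get(request_id, 0)
        if count == 0:
            # Nothing to load, but remember the request so the remote
            # side can still be cleaned up without another callback.
            self.zero_match_requests.add(request_id)
        # load_kv_async is only meaningful when count > 0.
        return count, count > 0
```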

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>
@simon-mo simon-mo added this to the v0.9.0 milestone May 16, 2025

@heheda12345 heheda12345 left a comment


LGTM! Sorry for the bug in my PR.
Should we add a comment in KVConnectorBase_V1.get_num_new_matched_tokens stating the constraint that a connector should only return load_kv_async=True when num_external_computed_tokens > 0, to help people implementing new connectors?


njhill commented May 18, 2025

Thanks @heheda12345, yes that's a good idea re making that clear in the comment. Really, the value of load_kv_async is just not applicable when num_external_computed_tokens == 0.
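One possible shape for that docstring note (suggested wording only, not the merged text):

```python
from abc import ABC, abstractmethod

class KVConnectorBase_V1(ABC):
    @abstractmethod
    def get_num_new_matched_tokens(self, request, num_computed_tokens):
        """Return (num_external_computed_tokens, load_kv_async).

        Constraint: a connector should only return load_kv_async=True
        when num_external_computed_tokens > 0. When it has no tokens
        to provide, it must return (0, False); the async-load flag is
        simply not applicable in that case.
        """
```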

@simon-mo simon-mo merged commit 1b15df2 into vllm-project:main May 19, 2025
67 of 69 checks passed
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
…ject#18232)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>