
Conversation

@sarckk sarckk (Collaborator) commented Aug 11, 2025

Summary:

#21588 added support for multiple attention metadata builders per KV cache spec.

As part of this change, each KV cache group now maps to one or more `AttentionGroup`, with one attention group created for each type of attention backend used.

However, if we want to enable KV sharing when we have more than one attention group, we run into the following assertion:

```
            assert len(attn_groups[group_idx]) == 1, (
                "Only one attention group per KV cache group is supported "
                "for KV-cache sharing for now.")
```

This PR makes the implementation more flexible so that KV cache sharing is supported even when there are multiple attention groups per KV cache group.
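A minimal sketch of the idea, using simplified and illustrative data structures (`AttentionGroup`, `layer_to_group_idx`, `layer_to_backend`, and `add_kv_sharing_layers` are hypothetical names, not the actual vLLM internals): instead of asserting that each KV cache group has exactly one attention group, a KV-sharing layer is attached to the attention group whose backend matches its target layer.

```python
# Minimal sketch, not the actual vLLM implementation; names are illustrative.
from dataclasses import dataclass, field


@dataclass
class AttentionGroup:
    backend_name: str
    layer_names: list[str] = field(default_factory=list)


def add_kv_sharing_layers(
    attn_groups: list[list[AttentionGroup]],  # one inner list per KV cache group
    shared_kv_layers: dict[str, str],         # layer -> target layer whose KV cache it reuses
    layer_to_group_idx: dict[str, int],       # target layer -> KV cache group index
    layer_to_backend: dict[str, str],         # layer -> attention backend name
) -> None:
    """Attach each KV-sharing layer to the attention group of its target layer."""
    for layer, target in shared_kv_layers.items():
        group_idx = layer_to_group_idx[target]
        for attn_group in attn_groups[group_idx]:
            # Pick the group whose backend matches the target layer's backend,
            # rather than assuming there is exactly one group per KV cache group.
            if attn_group.backend_name == layer_to_backend[target]:
                attn_group.layer_names.append(layer)
                break
```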

Test Plan:
The newly added unit test passes:

```
pytest tests/v1/test_kv_sharing.py
```

Rollback Plan:

Differential Revision: D80020191

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D80020191

@mergify mergify bot added the v1 label Aug 11, 2025
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request extends KV cache sharing to support multiple attention groups per KV cache group, which is a great improvement for flexibility. The changes in vllm/v1/worker/utils.py correctly handle mapping layers to their respective attention groups for KV sharing. The new unit tests in tests/v1/test_kv_sharing.py are comprehensive and cover various scenarios, including different attention backends, same backends, and the absence of attention groups.

I've found one critical issue regarding an edge case where attn_groups could be an empty list, which would cause a crash. My review includes a suggestion to fix this. Otherwise, the implementation looks solid.
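For illustration, a defensive check along these lines (names are hypothetical and follow the sketch above, not the actual code in vllm/v1/worker/utils.py) would avoid indexing into an empty list:

```python
# Illustrative only: guard against a KV cache group with no attention groups
# before resolving the group for a KV-sharing layer.
def resolve_attn_group(attn_groups, group_idx, target_backend_name):
    groups = attn_groups[group_idx]
    if not groups:
        raise ValueError(
            f"No attention groups found for KV cache group {group_idx}; "
            "cannot set up KV-cache sharing for this group.")
    for group in groups:
        if group.backend_name == target_backend_name:
            return group
    # Fall back to the first group if no backend matches exactly.
    return groups[0]
```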

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D80020191

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D80020191

@sarckk sarckk requested a review from LucasWilkinson August 12, 2025 20:16
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D80020191

@LucasWilkinson LucasWilkinson (Collaborator) left a comment

LGTM; can you rebase off main to get a green CI?

Having attn_groups as a 2D array is getting a bit messy; we may want to rethink this, but we can do that in a future PR if we decide to change anything.

@heheda12345 heheda12345 (Collaborator) left a comment

LGTM!

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D80020191

@sarckk sarckk added the ready label Aug 15, 2025
@heheda12345 heheda12345 merged commit 3e2f798 into vllm-project:main Aug 15, 2025
38 checks passed
666even666 pushed a commit to 666even666/vllm that referenced this pull request Aug 18, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: Yiwen Chen <yiwen66@berkeley.edu>
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
divakar-amd pushed a commit to divakar-amd/vllm_upstream that referenced this pull request Aug 20, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
djmmoss pushed a commit to djmmoss/vllm that referenced this pull request Aug 21, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: Duncan Moss <djm.moss@gmail.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: Xiao Yu <xiao.yu@amd.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>