Support multiple attention groups for KV sharing #22672
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
This pull request was exported from Phabricator. Differential Revision: D80020191
Code Review
This pull request extends KV cache sharing to support multiple attention groups per KV cache group, which is a great improvement for flexibility. The changes in vllm/v1/worker/utils.py correctly handle mapping layers to their respective attention groups for KV sharing. The new unit tests in tests/v1/test_kv_sharing.py are comprehensive and cover various scenarios, including different attention backends, same backends, and the absence of attention groups.
I've found one critical issue regarding an edge case where attn_groups could be an empty list, which would cause a crash. My review includes a suggestion to fix this. Otherwise, the implementation looks solid.
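For illustration, here is a minimal sketch of the kind of guard the review is pointing at. The function name, signatures, and data layout below are assumptions for this example, not the actual code in vllm/v1/worker/utils.py:

```python
# Hypothetical sketch of the suggested guard; names and signatures are
# illustrative, not the actual vLLM API in vllm/v1/worker/utils.py.

def map_layers_to_attn_groups(
    attn_groups: list[list[list[str]]],       # [kv_group_idx][backend_idx] -> layer names
    shared_kv_cache_layers: dict[str, str],   # sharing layer -> target layer it reuses
) -> dict[str, tuple[int, int]]:
    """Resolve each KV-sharing layer to its target's (kv_group_idx, backend_idx)."""
    layer_to_group: dict[str, tuple[int, int]] = {}
    for kv_group_idx, groups in enumerate(attn_groups):
        if not groups:
            # Guard against an empty attention-group list: indexing groups[0]
            # here (or asserting on len(groups)) would otherwise crash.
            continue
        for backend_idx, layer_names in enumerate(groups):
            for name in layer_names:
                layer_to_group[name] = (kv_group_idx, backend_idx)
    return {
        layer: layer_to_group[target]
        for layer, target in shared_kv_cache_layers.items()
        if target in layer_to_group
    }
```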
Force-pushed 4ce5ae3 to 00dde5c
Force-pushed 00dde5c to d7636ec
Force-pushed d7636ec to 2cca1c5
LGTM; can you rebase off main to get a green CI?
Having attn_groups as a 2D array is getting a bit messy; we may want to rethink this, but we can do that in a future PR if we decide to do something.
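To make the 2D shape being discussed concrete, here is a tiny hand-written example; the layer names and values are purely illustrative, not taken from vLLM:

```python
# attn_groups[kv_cache_group_idx][backend_idx] -> layers in that attention group (illustrative only)
attn_groups = [
    [["layers.0.attn", "layers.2.attn"], ["layers.1.attn"]],  # KV cache group 0: two backends
    [["layers.3.attn"]],                                      # KV cache group 1: one backend
]
assert len(attn_groups[0]) == 2  # the previous `== 1` assertion would have rejected this layout
```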
LGTM!
Force-pushed 2cca1c5 to c47963c
Force-pushed c47963c to c1349b0
Summary:

vllm-project#21588 added support for multiple attention metadata builders per KV cache spec. As part of this change, each KV cache group now maps to one or more `AttentionGroup`, with one attention group created for each type of attention backend used. However, enabling KV sharing when there is more than one attention group runs into the following assertion:

```
assert len(attn_groups[group_idx]) == 1, (
    "Only one attention group per KV cache group is supported "
    "for KV-cache sharing for now.")
```

This PR makes the implementation more flexible, so that KV cache sharing is supported when there are multiple attention groups per KV cache group.

Test Plan:

The newly added unit test passes:

```
pytest tests/v1/test_kv_sharing.py
```

Rollback Plan:

Differential Revision: D80020191

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
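As a rough illustration of the "one attention group per backend type" idea described above, the sketch below shows how layers in a single KV cache group might be split by backend. The layer names, backend strings, and grouping helper are assumptions for this example, not vLLM code:

```python
# Illustrative sketch (not vLLM code): splitting the layers of one KV cache
# group into one attention group per attention backend type.
from collections import defaultdict

# Hypothetical layer -> backend assignment for a single KV cache group.
layer_backends = {
    "model.layers.0.self_attn": "BackendA",
    "model.layers.1.self_attn": "BackendB",
    "model.layers.2.self_attn": "BackendA",
}

groups_by_backend: dict[str, list[str]] = defaultdict(list)
for layer_name, backend in layer_backends.items():
    groups_by_backend[backend].append(layer_name)

# One attention group per backend type within this KV cache group.
attn_groups_for_kv_group = list(groups_by_backend.values())
print(attn_groups_for_kv_group)
# [['model.layers.0.self_attn', 'model.layers.2.self_attn'],
#  ['model.layers.1.self_attn']]
```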
Force-pushed c1349b0 to 91e4174