
Conversation

Collaborator

@heheda12345 heheda12345 commented Sep 17, 2025

Purpose

The hybrid allocator requires all layers to have the same kv_hidden_size, but some models break this assumption, e.g.:

  1. Speculative decoding, where the MTP/draft layer and the main model differ, as in [Feature]: Support Eagle Draft Model with different number of KV heads #22432
  2. Recent native sparse attention, e.g. MiniCPM 4.1, which runs attention.0 on a smaller KV cache to select tokens and then runs attention.1 only on the selected tokens. Both attention.0 and attention.1 can be regarded as full attention, but with different hidden sizes.

If the model only has one type of KV cache (e.g., all layers are full attention, or all layers are sliding window with the same window size), KVCacheManager can treat the model as having only one layer of that type (i.e., a single KVCacheGroup containing all layers).

This PR supports this kind of KVCacheGroup by introducing UniformTypeKVCacheSpecs.
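Roughly, a uniform-type spec groups per-layer specs that share one attention type and block size but whose per-layer page sizes may differ because kv_hidden_size differs; presumably the group's page size is the sum over layers. A minimal sketch of that idea (class and field names here are illustrative, not the actual vLLM definitions):

from dataclasses import dataclass

@dataclass(frozen=True)
class LayerSpecSketch:
    # stand-in for a per-layer KV cache spec
    block_size: int
    page_size_bytes: int

@dataclass
class UniformTypeSpecSketch:
    # stand-in for UniformTypeKVCacheSpecs: all layers share one attention
    # type and block size, but per-layer page sizes may differ
    kv_cache_specs: dict  # layer name -> LayerSpecSketch

    @property
    def page_size_bytes(self) -> int:
        # one block of the group spans all layers, so per-layer page sizes add up
        return sum(s.page_size_bytes for s in self.kv_cache_specs.values())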

The model runner is updated to support different kv_cache_specs within one group: layers are split into different attn_groups and get different attention metadata builders based on their kv_cache_spec.
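A minimal sketch of that split (hypothetical function and names; the real model-runner code differs): layers that share both a backend and a kv_cache_spec go into one attention group, so layers with a different kv_hidden_size end up with their own metadata builder.

from collections import defaultdict

def split_attn_groups_sketch(layers):
    """layers: dict mapping layer name -> (backend_name, kv_cache_spec).

    Assumes kv_cache_spec is hashable (e.g., a frozen dataclass).
    """
    groups = defaultdict(list)
    for layer_name, (backend_name, kv_cache_spec) in layers.items():
        # keying on the spec as well as the backend keeps layers with a
        # different kv_hidden_size in separate attention groups
        groups[(backend_name, kv_cache_spec)].append(layer_name)
    return groups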

Test Plan

Initialize the model with

llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    enforce_eager=True,
    speculative_config={"model": "yuhuili/EAGLE-mixtral-instruct-8x7B", "num_speculative_tokens": 3, "method":"eagle"},
    tensor_parallel_size=4
)

and run basic.py
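basic.py is the stock vLLM offline-inference example; roughly, it does something like the following with the llm object above (the prompts and sampling parameters here are illustrative):

from vllm import SamplingParams

prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)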

Test Result

Can generate meaningful results.

Fix #22432


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for models with varying hidden sizes across layers by adding UniformTypeKVCacheSpecs. This is a valuable feature, particularly for speculative decoding and certain sparse attention models. The changes are well-structured, with necessary refactoring in the model runner and the addition of relevant tests. I have identified one issue where a configuration option is not being applied in the new code path, which could lead to incorrect behavior.

Comment on lines +1017 to +1018
num_blocks = available_memory // kv_cache_groups[
0].kv_cache_spec.page_size_bytes
Contributor


high

The num_gpu_blocks_override configuration is not being respected in this new code path for UniformTypeKVCacheSpecs. The calculation for num_blocks should use the get_num_blocks helper function, similar to the else branch, to ensure that user-provided overrides are applied correctly.

        page_size = kv_cache_groups[0].kv_cache_spec.page_size_bytes
        num_blocks = get_num_blocks(vllm_config,
                                    1,
                                    available_memory=available_memory,
                                    page_size=page_size)
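
For reference, a rough sketch of the override behavior being described, assuming a hypothetical helper and the standard cache_config.num_gpu_blocks_override field (the actual get_num_blocks helper in vLLM may differ):

def get_num_blocks_sketch(vllm_config, num_layers, available_memory, page_size):
    # memory-based estimate: how many blocks of `num_layers` pages fit
    num_blocks = available_memory // (page_size * num_layers)
    override = vllm_config.cache_config.num_gpu_blocks_override
    if override is not None:
        # a user-provided override takes precedence over the estimate
        num_blocks = override
    return num_blocks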

attn_backend,
)

key = attn_backend.full_cls_name()
Collaborator Author


@LucasWilkinson Why didn't we add kv_cache_spec as a key when splitting kv cache groups into attention groups?


# All workers have the same kv_cache_config except layer names, so use
# an arbitrary one to initialize the scheduler.
assert all([
Collaborator Author


this assert is in generate_scheduler_kv_cache_config now.

@heheda12345 heheda12345 added the ready label (ONLY add when PR is ready to merge/full CI is needed) Sep 19, 2025
@heheda12345 heheda12345 merged commit 9607d5e into vllm-project:main Sep 20, 2025
54 checks passed
@heheda12345 heheda12345 deleted the different_hidden_size branch September 20, 2025 06:44
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Sep 22, 2025
### What this PR does / why we need it?
Follow up on the `UniformTypeKVCacheSpecs` changes introduced by
vllm-project/vllm#25101, which support different
hidden sizes in uniform-type kvcache specs.

This also fixes the CI issue `TypeError: AttentionGroup.__init__()
missing 1 required positional argument: 'kv_cache_spec'`.

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
Passed with existing e2e tests.

- vLLM version: v0.10.2
- vLLM main:
vllm-project/vllm@c60e613

---------

Signed-off-by: MengqingCao <cmq0113@163.com>
Mercykid-bash pushed a commit to Mercykid-bash/vllm-ascend that referenced this pull request Sep 22, 2025
Mercykid-bash pushed a commit to Mercykid-bash/vllm-ascend that referenced this pull request Sep 22, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
…llm-project#25101)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: charlifu <charlifu@amd.com>
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
…25101)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
…llm-project#25101)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025

Labels

ready (ONLY add when PR is ready to merge/full CI is needed), v1


Development

Successfully merging this pull request may close these issues.

[Feature]: Support Eagle Draft Model with different number of KV heads
