[v1][core] Support for attention free models #20811
Changes from 5 commits
```diff
@@ -89,7 +89,7 @@ def __init__(
         self.prefix_cache_stats = PrefixCacheStats() if log_stats else None

         self.block_size: Optional[int] = None
-        if self.enable_caching:
+        if self.enable_caching and len(kv_cache_config.kv_cache_groups) > 0:
             assert len(
                 set(g.kv_cache_spec.block_size
                     for g in kv_cache_config.kv_cache_groups)
```
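Why the added guard matters, as a minimal runnable sketch: an attention-free model registers no KV-cache groups, so deriving a single block size from the groups would trip over an empty set. The dataclasses below are simplified stand-ins for vLLM's real spec/group types, not the actual classes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KVCacheSpec:            # stand-in for the real per-group spec
    block_size: int

@dataclass
class KVCacheGroup:
    kv_cache_spec: KVCacheSpec

kv_cache_groups: list[KVCacheGroup] = []   # attention-free: no groups at all
enable_caching = True

block_size: Optional[int] = None
if enable_caching and len(kv_cache_groups) > 0:
    sizes = {g.kv_cache_spec.block_size for g in kv_cache_groups}
    assert len(sizes) == 1     # would be 0 (and fail) without the new guard
    block_size = sizes.pop()

print(block_size)              # None: block-size bookkeeping is skipped
```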
```diff
@@ -209,6 +209,9 @@ def determine_available_memory(self) -> int:
         You may limit the usage of GPU memory
         by adjusting the `gpu_memory_utilization` parameter.
         """
+        if self.vllm_config.model_config.is_attention_free:
+            return 0
+
         torch.cuda.empty_cache()
         torch.cuda.reset_peak_memory_stats()
         GiB = lambda b: b / GiB_bytes
```
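The downstream effect of the early `return 0`, sketched with a hypothetical helper (the real block-sizing logic in vLLM is more involved): the reported available memory is what sizes the KV cache, so 0 bytes yields zero cache blocks, which is exactly right when no layer ever writes KV state.

```python
# Hypothetical helper illustrating the effect of `return 0` above.
def num_gpu_blocks(available_kv_memory: int, bytes_per_block: int) -> int:
    if available_kv_memory <= 0:
        return 0                 # attention-free models allocate no blocks
    return available_kv_memory // bytes_per_block

print(num_gpu_blocks(0, 2 * 1024 * 1024))              # -> 0
print(num_gpu_blocks(8 * 1024**3, 2 * 1024 * 1024))    # -> 4096
```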
Do we need this, given that prefix caching is disabled here for models that don't use the `last` pooling method?
https://github.com/maxdebayser/vllm/blob/221f013922c0c118b682d294755e69990b2c43ed/vllm/config.py#L4505
Without this check, though, you would not be able to disable attention for models that are not of the pooling type, since prefix caching is enabled by default for all models except pooling ones.
See the defaulting logic in `vllm/vllm/engine/arg_utils.py`, lines 1620 to 1630 at 38efa28.
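Since that snippet isn't reproduced here, the following is only a hedged sketch of the defaulting behaviour described above (the function name and signature are made up for illustration, not the actual code at those lines): prefix caching defaults to on unless the model is a pooling model or the user overrides it.

```python
from typing import Optional

# Hypothetical sketch, not the real arg_utils.py logic.
def resolve_enable_prefix_caching(runner_type: str,
                                  user_setting: Optional[bool]) -> bool:
    if user_setting is not None:        # an explicit CLI flag wins
        return user_setting
    return runner_type != "pooling"     # on by default except for pooling

print(resolve_enable_prefix_caching("generate", None))   # True
print(resolve_enable_prefix_caching("pooling", None))    # False
```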
Perhaps the safest thing to do is to disable prefix caching in `VllmConfig.__post_init__` right away for any attention-free models, and then yes, we could just rely on `enable_caching` as you suggest.
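A sketch of what that suggestion could look like, using simplified stand-in dataclasses rather than the real `VllmConfig`:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    is_attention_free: bool = False

@dataclass
class CacheConfig:
    enable_prefix_caching: bool = True

@dataclass
class VllmConfig:
    model_config: ModelConfig
    cache_config: CacheConfig

    def __post_init__(self) -> None:
        # Nothing to prefix-cache without attention KV state, so turn the
        # feature off up front; downstream code can then rely on
        # enable_caching alone.
        if self.model_config.is_attention_free:
            self.cache_config.enable_prefix_caching = False

cfg = VllmConfig(ModelConfig(is_attention_free=True), CacheConfig())
print(cfg.cache_config.enable_prefix_caching)   # False
```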
remove this comment?