Roberta embedding #7969
Conversation
…to tests/kernels/utils.py from vllm/utils.py
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these:
@@ -34,7 +34,7 @@ class PagedAttention:

     @staticmethod
     def get_supported_head_sizes() -> List[int]:
-        return [64, 80, 96, 112, 120, 128, 192, 256]
+        return [32, 64, 80, 96, 112, 120, 128, 192, 256]
TODO: It's strange that just adding another head size here makes the code run. Perhaps this is actually a silent failure and the actual kernel has to be added somewhere.
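To make that concern concrete: if the list returned by get_supported_head_sizes() is only consulted by a membership check along the lines of the simplified sketch below (an illustration, not the actual vLLM code), then adding 32 lets validation pass regardless of whether a matching attention kernel was built for that head size, which would indeed be a silent failure.

```python
from typing import List


def get_supported_head_sizes() -> List[int]:
    # Mirrors the list edited in this PR, including the newly added 32.
    return [32, 64, 80, 96, 112, 120, 128, 192, 256]


def validate_head_size(head_size: int) -> None:
    # Simplified illustration: a membership check like this only proves that
    # the value is in the Python list, not that a kernel supports it.
    supported = get_supported_head_sizes()
    if head_size not in supported:
        raise ValueError(f"Head size {head_size} is not supported. "
                         f"Supported head sizes are: {supported}.")


if __name__ == "__main__":
    validate_head_size(32)  # passes after this PR's change
```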
# Conflicts:
#	vllm/core/embedding_model_block_manager.py
Signed-off-by: Max de Bayser <maxdebayser@gmail.com>
# Conflicts:
#	vllm/inputs/data.py
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Closed in favor of #9387
This is a Draft PR based on PR #5447 to test Roberta embedding models.
To run it, CUDA graphs have to be disabled, because they aren't supported with encoder models; a rough sketch follows below.
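As an offline sketch of that constraint (the model name is a placeholder, not one referenced in this PR), CUDA graphs can be turned off with enforce_eager:

```python
from vllm import LLM

# enforce_eager=True disables CUDA graph capture, which encoder models need;
# the model below is only a placeholder RoBERTa-style embedding checkpoint.
llm = LLM(
    model="sentence-transformers/all-roberta-large-v1",
    enforce_eager=True,
)

outputs = llm.encode(["vLLM is a fast inference engine."])
print(len(outputs[0].outputs.embedding))  # embedding dimensionality
```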
To test with the embeddings API:
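The exact command is not reproduced here; as an assumed sketch, with a vLLM OpenAI-compatible server already running locally on port 8000 (started with CUDA graphs disabled, e.g. via --enforce-eager) and a placeholder model name, a request against the embeddings endpoint could look like:

```python
import requests

# Placeholder model name; substitute whatever RoBERTa embedding model the
# server was actually started with.
response = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "sentence-transformers/all-roberta-large-v1",
        "input": ["vLLM is a fast inference engine."],
    },
)
response.raise_for_status()
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))
```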