support vLLM cache salting in prefix aware scorer #1646
Conversation
/cc @liu-cong
Thanks, mostly nits; I just want to make sure the parsing aligns with the vLLM API.
// Prompt is the prompt that was sent in the request body.
Prompt string `json:"prompt,omitempty"`
// CacheSalt is a parameter from the vLLM cache-salting security feature.
CacheSalt string `json:"cache_salt,omitempty"`
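For context, here is a minimal sketch of how the scorer can fold the cache salt into its prefix-hash seed so that requests carrying different salts never share prefix-cache entries, mirroring vLLM's cache-salting semantics. The function and field names below are illustrative assumptions, not the plugin's actual API.

package main

import (
	"fmt"
	"hash/fnv"
)

// prefixCacheSeed is a hypothetical helper (not the plugin's real API): it
// mixes the request's cache salt and model name into the seed used for
// prefix block hashing, so two requests with different salts never map to
// the same prefix-cache entries.
func prefixCacheSeed(cacheSalt, model string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(cacheSalt)) // empty string when the request carries no cache_salt
	h.Write([]byte(model))
	return h.Sum64()
}

func main() {
	fmt.Println(prefixCacheSeed("example-salt", "food-review-1"))
}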
Just to confirm: did you test with both completion and chat completions requests against vLLM and make sure the parsing here works?
Yes. I have checked the request body definitions in vLLM:
- ChatCompletionRequest: https://github.com/vllm-project/vllm/blob/392edee34a008af2453d936cb3cdbd97842984a7/vllm/entrypoints/openai/protocol.py#L425
- CompletionRequest: https://github.com/vllm-project/vllm/blob/392edee34a008af2453d936cb3cdbd97842984a7/vllm/entrypoints/openai/protocol.py#L1024
Both of them have cache_salt, so I sent the curl requests below.

For completion:
curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
"model": "food-review-1",
"prompt": "Write as if you were a critic: San Francisco",
"max_tokens": 100,
"cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ==",
"temperature": 0
}'
For chat completions:
curl -X POST -i ${IP}:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "food-review-1",
"max_tokens": 100,
"temperature": 0,
"cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ==",
"messages": [
{
"role": "developer",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Linux is said to be an open source kernel because "
}
]
}'
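For illustration, a small Go sketch of the parsing this relies on: because cache_salt sits at the top level of both request bodies, a single unmarshal covers the completion and chat completions shapes. The struct below mirrors the fields shown in the diff above and is an assumption, not the repository's exact type.

package main

import (
	"encoding/json"
	"fmt"
)

// requestBody is an illustrative type, not the repo's exact struct:
// cache_salt is a top-level field in both /v1/completions and
// /v1/chat/completions bodies, so one unmarshal handles both.
type requestBody struct {
	Model     string `json:"model"`
	Prompt    string `json:"prompt,omitempty"`
	CacheSalt string `json:"cache_salt,omitempty"`
}

func main() {
	completion := `{"model":"food-review-1","prompt":"Write as if you were a critic","cache_salt":"abc"}`
	chat := `{"model":"food-review-1","messages":[{"role":"user","content":"hi"}],"cache_salt":"abc"}`
	for _, body := range []string{completion, chat} {
		var req requestBody
		if err := json.Unmarshal([]byte(body), &req); err != nil {
			panic(err)
		}
		fmt.Println(req.CacheSalt) // prints "abc" for both request shapes
	}
}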
Co-authored-by: Cong Liu <conliu@google.com>
/ok-to-test
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: ahg-g, Frapschen

The full list of commands accepted by this bot can be found here. The pull request process is described here.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Support vLLM cache salting in the prefix-aware scorer.
Which issue(s) this PR fixes:
Fixes #1631
Does this PR introduce a user-facing change?: