Conversation

dr75
Contributor

@dr75 dr75 commented Apr 23, 2025

Context

Prefix caching in vLLM improves inference performance by reusing KV blocks across requests. However, this reuse introduces a potential privacy risk in shared environments, where an attacker could infer prompt reuse via timing side channels, as demonstrated in Leaking Secrets from Prefix Caches.

To address this, we propose to isolate caches as described in an RFC: #16016

Suggested change

This PR implements the single barrier approach from the RFC by adding support for an optional cache_salt field in the request schema. When present, the salt is injected into the hash of the first block, ensuring that only requests with the same salt can share cached blocks. This effectively segments cache reuse by salt and protects against timing-based attacks.

{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Here is a document with details about the world series: ..."},
    {"role": "user", "content": "Who won the world series in 2020?"}
  ],
  "cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ=="
}

The change is compatible with the OpenAI API, as it only adds an optional field. Users can still use the OpenAI client:

response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={
        "cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ==",
    },
)

The scope of cache sharing can be configured per request as needed, e.g., full per-user isolation or cache sharing within a group of users.
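For illustration, a client could derive the salt from whatever identity should define the sharing scope. This is a hypothetical client-side helper, not part of vLLM; the scope keys and function name are made up:

```python
import hashlib

def make_cache_salt(scope_key: str) -> str:
    # Derive a deterministic salt from a scope identifier.
    # scope_key could be a user ID (full per-user isolation)
    # or a team ID (cache sharing within that group).
    return hashlib.sha256(scope_key.encode("utf-8")).hexdigest()

# Per-user isolation: no other user's requests share cached blocks.
salt_user = make_cache_salt("user:alice")

# Group-level sharing: all requests with this salt can reuse each other's blocks.
salt_team = make_cache_salt("team:search-infra")
```

The derived string would then be passed as `cache_salt` in the request body, as in the examples above.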

The change is in line with the cache protection applied by other providers such as OpenAI, while offering more flexibility through fine-grained, per-request configuration.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which starts with a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@dr75 dr75 changed the title Prevent side-channel attacks via cache salting [Core] Prevent side-channel attacks via cache salting Apr 23, 2025
@mergify mergify bot added documentation Improvements or additions to documentation frontend multi-modality Related to multi-modality (#4194) v1 labels Apr 23, 2025
@DarkLight1337 DarkLight1337 requested a review from russellb April 23, 2025 10:56
@russellb
Member

I wonder how much this helps given that vLLM already initializes hashes with a random number that's different each time vLLM is executed. related: GHSA-rm76-4mrf-v9r8

@russellb
Member

I wonder how much this helps given that vLLM already initializes hashes with a random number that's different each time vLLM is executed. related: GHSA-rm76-4mrf-v9r8

sorry, I hadn't read the paper yet!

Member

@russellb russellb left a comment


I think maintainers of the KV cache should review this, but I like this conceptually from a security perspective. Thank you!

@dr75
Contributor Author

dr75 commented Apr 23, 2025

This is related too: #15297

@dr75
Contributor Author

dr75 commented Apr 23, 2025

I think maintainers of the KV cache should review this, but I like this conceptually from a security perspective. Thank you!

Thanks! I am in touch with @comaniac about taking a look.

@dr75
Contributor Author

dr75 commented Apr 23, 2025

Both CI failures (each failing slightly differently) are in EntrypointsTest, in PEFTHelper.from_local_dir() reading a file, so I assume they are unrelated.

Collaborator

@comaniac comaniac left a comment


Otherwise LGTM. cc @WoosukKwon @ywang96

Comment on lines +386 to +387
Collaborator


IIUC, we only include the cache salt in the first block of a prompt? Is there any particular reason not to include it in all blocks?

Contributor Author


It is propagated to all blocks via the hash of the previous block, so adding it to every block would not improve anything.
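A minimal sketch of why salting the first block is sufficient (illustrative only, not vLLM's actual hashing code): because each block's hash folds in the previous block's hash, a salt mixed into the first block changes every hash in the chain.

```python
import hashlib

def block_hashes(token_blocks, salt=None):
    # Hash each block over (previous hash, block tokens).
    # The salt seeds the first block only; every later hash depends
    # on the previous one, so the salt propagates to all blocks.
    hashes = []
    prev = salt or ""
    for block in token_blocks:
        h = hashlib.sha256((prev + repr(block)).encode("utf-8")).hexdigest()
        hashes.append(h)
        prev = h
    return hashes

blocks = [(1, 2, 3), (4, 5, 6)]
a = block_hashes(blocks, salt="tenant-a")
b = block_hashes(blocks, salt="tenant-b")
# Different salts: every block hash differs, so no block is shared.
# Same salt: identical hashes, so blocks can be reused.
```

Requests with the same salt reproduce identical hash chains and can share cached blocks; any other salt diverges from block zero onward.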

@dr75
Contributor Author

dr75 commented Apr 23, 2025

One thing I just realized @comaniac: I added it only to the V1 engine, but that's problematic for V0. In addition to a note in the docs, I think there should be an error for requests with a salt sent to the V0 engine; otherwise users of V0 will provide a salt but get no feedback that it is not used.

Any suggestion? Or do you consider V0 deprecated and it's good enough to only have a note in the docs?

@comaniac
Collaborator

One thing I just realized @comaniac: I added it only to the V1 engine but that's problematic for V0. In addition to a comment in the docs I guess there should be some error for requests with salt to the V0 engine, otherwise users of V0 will provide a salt but don't get feedback that it is not used.

Any suggestion? Or do you consider V0 deprecated and it's good enough to only have a note in the docs?

Yeah definitely we should error out when v0 engine receives the cache salt.
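A minimal sketch of such a guard (the function name and signature are illustrative, not vLLM's actual API): the v0 engine would silently ignore the salt, so failing loudly is safer than silently dropping the protection.

```python
from typing import Optional

def validate_cache_salt(engine_version: str, cache_salt: Optional[str]) -> None:
    # Hypothetical validation hook: reject salted requests on the v0
    # engine instead of silently ignoring the salt.
    if cache_salt is not None and engine_version == "v0":
        raise ValueError(
            "cache_salt is only supported by the v1 engine; "
            "the v0 engine would silently ignore it."
        )

validate_cache_salt("v1", "some-salt")  # accepted
validate_cache_salt("v0", None)         # accepted, no salt requested
```

With a check like this, a v0 user who supplies a salt gets an immediate error instead of a false sense of isolation.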

@russellb russellb added the security Security related issues and PRs label Apr 25, 2025

mergify bot commented Apr 28, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @dr75.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Apr 28, 2025
@dr75
Contributor Author

dr75 commented Apr 28, 2025

One thing I just realized @comaniac: I added it only to the V1 engine but that's problematic for V0. In addition to a comment in the docs I guess there should be some error for requests with salt to the V0 engine, otherwise users of V0 will provide a salt but don't get feedback that it is not used.
Any suggestion? Or do you consider V0 deprecated and it's good enough to only have a note in the docs?

Yeah definitely we should error out when v0 engine receives the cache salt.

Updated the docs and added an error for requests that use the salt with V0.

dr75 added 4 commits April 30, 2025 06:06
Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) April 30, 2025 06:15
@DarkLight1337
Member

Please fix pre-commit

@DarkLight1337
Member

DarkLight1337 commented Apr 30, 2025

And try to avoid force push. Each time you do it I have to read the whole PR again

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
auto-merge was automatically disabled April 30, 2025 06:20

Head branch was pushed to by a user without write access

@dr75
Contributor Author

dr75 commented Apr 30, 2025

And try to avoid force push. Each time you do it I have to read the whole PR again

Sorry, I had to rebase. Do you prefer merging?

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
@DarkLight1337
Member

And try to avoid force push. Each time you do it I have to read the whole PR again

Sorry, had to rebase. You prefer merging?

Yeah, we will squash the PR at the end anyway

@dr75
Contributor Author

dr75 commented Apr 30, 2025

@DarkLight1337, seems good now. Shall we merge before it conflicts again?

@DarkLight1337 DarkLight1337 merged commit 77073c7 into vllm-project:main Apr 30, 2025
48 checks passed
@dr75
Contributor Author

dr75 commented Apr 30, 2025

Thanks for your support @DarkLight1337, @comaniac, @russellb !

radeksm pushed a commit to radeksm/vllm that referenced this pull request May 2, 2025
…7045)

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
…7045)

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
…7045)

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
yma11 pushed a commit to yma11/vllm that referenced this pull request Jun 4, 2025
…7045)

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>