Closed as not planned
Labels: bug (Something isn't working)
Description
Your current environment
The output of `python collect_env.py`
N/A.
I'm using the latest Docker image:
docker pull vllm/vllm-openai:latest
latest: Pulling from vllm/vllm-openai
...
Digest: sha256:4d8d397a62c36237293a4d5e2acbf911b91b0a8552825bda69f581c5811af9ec
Status: Downloaded newer image for vllm/vllm-openai:latest
And I'm running it as follows:
docker run --runtime nvidia --gpus all -v ~/.cache/huggingface:/root/.cache/huggingface --env "HUGGING_FACE_HUB_TOKEN=xxx" --env VLLM_LOGGING_LEVEL=DEBUG -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model deepseek-ai/DeepSeek-R1-Distill-Llama-70B --enforce-eager --tensor-parallel-size=4 --enable-reasoning --reasoning-parser deepseek_r1
🐛 Describe the bug
I'm using the exact same example of Reasoning with Structured Outputs from the vLLM docs:
https://docs.vllm.ai/en/v0.8.1/features/reasoning_outputs.html#structured-output
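For reference, the request follows that docs example. A minimal sketch of the payload I'm sending (the schema and prompt below are illustrative stand-ins, not copied verbatim from my run):

```python
# Sketch of the chat-completions payload, assuming the vLLM server was
# started with the docker run command above. The schema and prompt here
# are illustrative.
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

payload = {
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    "messages": [
        {"role": "user", "content": "Make up a person with a name and an age."}
    ],
    # vLLM's OpenAI-compatible server accepts guided_json as an extra
    # sampling parameter for structured (JSON-schema-constrained) output.
    "guided_json": json_schema,
}

# Sending it would look roughly like:
#   requests.post("http://localhost:8000/v1/chat/completions", json=payload)
```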
I expect to see both reasoning and content. Something like this:
reasoning_content: "Hmm, let me think for a bit... Wait, let me think a bit more..."
content: {"name": "Ethan", "age": 28}
Instead I get:
reasoning_content: {"name": "Ethan", "age": 28}
content: None
(I also tried guided_grammar, although the top of the same page doesn't document it as supported with reasoning. In that case it worked, but it was unbearably slow, pegging one CPU core at 100% without utilising the GPU. Hence I'm trying guided_json, which is very fast but doesn't work, as shown above.)
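As a stopgap, the misplaced output can still be recovered client-side. A minimal workaround sketch (the helper name is mine, not part of vLLM), assuming the JSON lands in reasoning_content exactly as shown above:

```python
import json

def extract_structured(reasoning_content, content):
    # Workaround for the bug described above: when the guided_json output
    # ends up in reasoning_content and content is None, fall back to
    # parsing reasoning_content instead.
    raw = content if content is not None else reasoning_content
    return json.loads(raw)

# Simulating the buggy response from this report:
print(extract_structured('{"name": "Ethan", "age": 28}', None))
# → {'name': 'Ethan', 'age': 28}
```

This only papers over the symptom; the reasoning text itself is still lost, so the parser bug still needs fixing server-side.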