Conversation

@heheda12345
Collaborator

@heheda12345 heheda12345 commented Aug 21, 2025

Purpose

Fix #22403 (comment)

Test Plan

Send the request from #22403 (comment) locally.

Test Result

Previously, this request raised the error "Expected 2 output messages (reasoning and final), but got 48."; with this fix it succeeds.
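
For context, a minimal self-contained sketch (not vLLM's actual code; Msg and output_msgs are hypothetical stand-ins for parsed harmony messages) of the kind of strict check that produces this error:

# Hypothetical sketch: the old logic assumed exactly two harmony output
# messages, one reasoning ("analysis" channel) and one final answer, and
# raised for anything else -- which breaks when the model emits several
# reasoning messages.
from dataclasses import dataclass

@dataclass
class Msg:
    channel: str
    text: str

output_msgs = [
    Msg("analysis", "Thought one."),
    Msg("analysis", "Thought two."),
    Msg("final", "The answer."),
]

if len(output_msgs) != 2:
    raise ValueError(f"Expected 2 output messages (reasoning and final), "
                     f"but got {len(output_msgs)}.")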

(Optional) Documentation Update



Signed-off-by: Chen Zhang <zhangch99@outlook.com>
@heheda12345 heheda12345 requested a review from aarnphm as a code owner August 21, 2025 07:40
@heheda12345
Collaborator Author

@781574155 can you try to cherry-pick this PR and see whether this problem still exists?

@mergify mergify bot added frontend gpt-oss Related to GPT-OSS models labels Aug 21, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes a bug in chat completion for gpt-oss models where an error was raised when multiple reasoning messages were generated. The change correctly handles this by concatenating all reasoning messages. My review includes a suggestion to add a newline separator when joining these messages to improve the readability of the final reasoning content.
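
A hedged sketch of that approach, reusing the hypothetical Msg/output_msgs from the sketch above (the real code accesses msg.content[0].text, as the snippet quoted below shows):

# Instead of asserting exactly two messages, partition by harmony channel
# and concatenate the text of every reasoning message.
reasoning_msgs = [m for m in output_msgs if m.channel == "analysis"]
final_msgs = [m for m in output_msgs if m.channel == "final"]
reasoning_content = "".join(m.text for m in reasoning_msgs)
final_content = final_msgs[-1].text if final_msgs else ""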

Comment on lines 344 to 345
reasoning_content = "".join(
[msg.content[0].text for msg in reasoning_msg])

Severity: high

The multiple reasoning messages are being joined without any separator. If these messages represent distinct thoughts or steps, this will result in a concatenated string that is hard to read and may be grammatically incorrect. For example, ["Thought one.", "Thought two."] would become "Thought one.Thought two.". It would be better to join them with a newline character to preserve the separation and improve readability.

Suggested change
-reasoning_content = "".join(
-    [msg.content[0].text for msg in reasoning_msg])
+reasoning_content = "\n".join(
+    [msg.content[0].text for msg in reasoning_msg])
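
A quick illustration of the separator difference the bot describes, using its own example strings:

thoughts = ["Thought one.", "Thought two."]
print("".join(thoughts))    # Thought one.Thought two.
print("\n".join(thoughts))  # Thought one.
                            # Thought two.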

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
@heheda12345 heheda12345 requested a review from WoosukKwon August 21, 2025 07:44
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@Ithanil
Contributor

Ithanil commented Aug 21, 2025

Sorry for hijacking this, but since you are working on fixes for GPT-OSS/harmony utils, and I know my PRs will never be looked at, I want to bring your attention to the following fix/tweak: #23167 (and #23155, though less important). I think enabling the commentary channel only when native tool calls are expected is a substantial usability improvement in certain scenarios.

Sorry again, I just want to help improve GPT-OSS support in vLLM. Feel free to cherry-pick from my commits.

Collaborator

@WoosukKwon WoosukKwon left a comment


thanks for the fix

@WoosukKwon WoosukKwon added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 21, 2025
@WoosukKwon WoosukKwon merged commit 8a19303 into vllm-project:main Aug 21, 2025
13 of 14 checks passed
@heheda12345
Collaborator Author

@Ithanil I've reviewed your PRs. Please CC me when you create new PRs related to gpt-oss.

djmmoss pushed a commit to djmmoss/vllm that referenced this pull request Aug 21, 2025
…llm-project#23318)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Duncan Moss <djm.moss@gmail.com>
@harshakokel

When can we expect a new release with this fix?

Xu-Wenqing pushed a commit to Xu-Wenqing/vllm that referenced this pull request Aug 23, 2025
…llm-project#23318)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: root <xwq391974@alibaba-inc.com>
@harshakokel

Hi @simon-mo, any plans on pushing this fix to https://hub.docker.com/r/vllm/vllm-openai/tags ?

epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
…llm-project#23318)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Xiao Yu <xiao.yu@amd.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
mengxingkongzhouhan pushed a commit to mengxingkongzhouhan/vllm that referenced this pull request Aug 30, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
mkumatag pushed a commit to mkumatag/vllm-cpu that referenced this pull request Sep 23, 2025
…Output Message (#279)

Upstream PR: vllm-project/vllm#23318 

Closes: https://issues.redhat.com/browse/RHOAIENG-34181

Image build:
https://github.com/neuralmagic/nm-cicd/actions/runs/17739898673

Resulting image (not published yet, but expected):
quay.io/vllm/automation-vllm:cuda-17739898673
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025

Labels

frontend · gpt-oss (Related to GPT-OSS models) · ready (ONLY add when PR is ready to merge/full CI is needed)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: For GPT OSS 120B: Expected 2 output messages (reasoning and final), but got 7.

4 participants