Conversation


@hmellor hmellor commented Apr 16, 2025

- Add warning that request-level structured output backend selection is not supported in V1
- Improve the error message so that it is clearer which value is which

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only the fastcheck CI, which executes a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


@russellb russellb left a comment


Thanks for improving the message!

Who is the log message for though? The person setting the request-level backend is going to see the exception, either in a Python program using vllm as a library, or in the HTTP response from the API.


hmellor commented Apr 16, 2025

Who is the log message for though? The person setting the request-level backend is going to see the exception, either in a Python program using vllm as a library, or in the HTTP response from the API.

My thinking was:

  • A library user who sets the backend in the request will see the warning telling them not to do that
  • A library user who sets the backend to something different from the engine backend will see both the warning and the error
  • An HTTP user who sets the backend to something different from the engine backend will see the error

@russellb

Who is the log message for though? The person setting the request-level backend is going to see the exception, either in a Python program using vllm as a library, or in the HTTP response from the API.

My thinking was:

  • A library user who sets the backend in the request will see the warning telling them not to do that
  • A library user who sets the backend to something different from the engine backend will see both the warning and the error
  • An HTTP user who sets the backend to something different from the engine backend will see the error

Won't all of these cases see the exception (either as the exception itself, or as the HTTP error response)?

hmellor added 2 commits April 16, 2025 14:06
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

hmellor commented Apr 16, 2025

Won't all of these cases see the exception (either as the exception itself, or as the HTTP error response)?

I thought that if a user set the request structured output backend to the same as the engine structured output backend, they wouldn't see the error?

edit: from offline discussion we decided that the warning only helps offline users and makes server logs super noisy when the problem doesn't concern the server admin. I've removed the warning.

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@hmellor hmellor changed the title Improve warning & error for structured output backend selection Improve error for structured output backend selection Apr 16, 2025
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

@russellb russellb left a comment


thanks!

@russellb russellb enabled auto-merge (squash) April 16, 2025 12:16
@github-actions github-actions bot added the ready label Apr 16, 2025
@russellb russellb merged commit 93e561e into vllm-project:main Apr 17, 2025
60 checks passed
@hmellor hmellor deleted the improve-warning branch April 17, 2025 07:33
lionelvillard pushed a commit to lionelvillard/vllm that referenced this pull request Apr 17, 2025
…6717)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
…6717)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
…6717)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
…6717)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
adobrzyn pushed a commit to HabanaAI/vllm-fork that referenced this pull request Apr 30, 2025
…6717)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
…6717)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>
Labels

ready, v1
