What happened + What you expected to happen
An error is raised when calling the OpenAI-compatible API of a model served with vLLM:
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'stream_options'), 'msg': 'Extra inputs are not permitted', 'input': {}}]", 'type': 'BadRequestError', 'param': None, 'code': 400}
The likely cause is the extra stream_options parameter in the request body; commenting out that parameter in the source code makes requests succeed.
Versions / Dependencies
v0.6.1
Reproduction script
Deploy the model with the latest vLLM and start it as an OpenAI-compatible server.
Issue Severity
None
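As a client-side workaround (instead of editing the source), the offending key can be dropped from the request kwargs before they are sent. This is a minimal sketch; the helper name `strip_stream_options` is hypothetical, and it assumes the request is built as a plain kwargs dict passed to the OpenAI client.

```python
def strip_stream_options(request_kwargs: dict) -> dict:
    """Return a copy of the request kwargs without 'stream_options'.

    Older vLLM OpenAI-compatible servers (before the fix) reject any
    request body containing this field with HTTP 400 'extra_forbidden'.
    """
    return {k: v for k, v in request_kwargs.items() if k != "stream_options"}


# Example request kwargs as they might be built by a caller that
# unconditionally sets stream_options (values here are illustrative):
kwargs = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "hi"}],
    "stream": True,
    "stream_options": {"include_usage": True},
}

safe_kwargs = strip_stream_options(kwargs)
# safe_kwargs no longer contains 'stream_options' and can be passed as
# client.chat.completions.create(**safe_kwargs) against the vLLM server.
```

Upgrading the server side is the cleaner fix; this filter only avoids the 400 response when the server cannot be upgraded.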
OK, we will take a look at this.
Solved in 0.6.2.
mushenL