Using OpenAI o*-mini generates an invalid tool parameters schema #4662
Comments
Seems like structured outputs is activated by default for reasoning models: ai/packages/openai/src/openai-chat-language-model.ts (lines 59 to 64 at commit d0d13f9)
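A rough paraphrase of what the linked lines appear to do (not the verbatim source; the `isReasoningModel` helper name is assumed here): a user-supplied setting wins when provided, and the fallback differs per model family.

```ts
// Paraphrased sketch, not the actual SDK source: structured outputs
// follow the user-supplied setting, and when none is given they fall
// back to "on" for reasoning models (o1/o3 family) and "off" otherwise.
get supportsStructuredOutputs(): boolean {
  return this.settings.structuredOutputs ?? isReasoningModel(this.modelId);
}
```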
OpenAI: https://platform.openai.com/docs/guides/structured-outputs#supported-schemas
Vercel AI SDK: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#structured-outputs
You can opt out of structured outputs by changing the `structuredOutputs` setting:
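A minimal sketch of that opt-out, assuming the `structuredOutputs` provider option from @ai-sdk/openai 1.x (whether it actually takes effect for the o*-mini models is what the follow-up comments question):

```ts
import { openai } from '@ai-sdk/openai';

// Opt out of structured outputs for this model instance.
const model = openai('o3-mini', { structuredOutputs: false });
```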
@lgrammel but as @edenstrom mentioned, it is enabled by default for reasoning models, unlike the other models.
@lgrammel can you explain why? I'd prefer to keep the same default behaviour for all models. Imho, the default behaviour should be opt-in.
@hopkins385 the goal is to move it to `true` for all models.
@lgrammel why?
@lgrammel are there any docs/issues on why you want to move to structured outputs enabled by default? Seems like that could be a bit of a footgun in some cases. Would love to read more about the reasoning behind that decision!
@vitorbal the goal was always to make structured outputs enabled by default for all models, because it enables more robust outputs. It only defaults to false for other models for backwards compat for now.
Still not clear to me why structured outputs should be enabled by default when it is not the default behavior of the models. I'd propose adhering to the default behavior of the model providers. Also, we should consider that not all models and/or providers even support structured outputs. Can you share more details on why the AI SDK plans to diverge from the defaults of all model providers? PS: structured outputs need to be configured separately and are only useful for very specific use cases, not for standard chat completion, afaik.
@hopkins385 contrast the 2 options:
That's why I think opt-out is preferable.
By having the opt-out option you are assuming that everyone prefers to configure a JSON schema and parse the output of the LLM, which would mean the main use case is not chat completion. But afaik this is not the case: the main use case is simply forwarding the tokens to the client (e.g. a chat scenario). As OpenAI writes in their docs:

I think "features" should always be opt-in.
@hopkins385 noted that you think that. Also noted that you may not be aware that the AI SDK only sets structured outputs if you actually supply a schema.
@lgrammel constructive feedback, please. As you can see/read, the confusion is already there and I am not alone in it.
Description
When using an `o1-mini` or `o3-mini` model with function calls, `strict: true` is added to the tool definition, whereas `gpt-4o` does not get the `strict` flag. This leads to bugs when optional parameters are provided in the schema. The same bug happens with `.default` or `.nullable`. The root cause seems to be that when `"strict": true` is added, the schema has to change its typings as described in the OpenAI structured outputs docs linked above. As a workaround you can create an OpenAI client that removes the `strict` flag (see the sketch after the reproduction link below).
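For illustration, a minimal repro sketch (the tool and prompt are hypothetical; the actual reproduction is in the gist below). The optional `unit` parameter is what strict mode rejects, since it requires every property to be required:

```ts
import { z } from 'zod';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('o3-mini'), // sends strict: true for the tool schema
  prompt: 'What is the weather in Berlin?',
  tools: {
    getWeather: tool({
      description: 'Get the weather for a city',
      parameters: z.object({
        city: z.string(),
        unit: z.enum(['C', 'F']).optional(), // rejected under strict mode
      }),
      execute: async ({ city }) => `Sunny in ${city}`,
    }),
  },
});
```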
Surprisingly, the `strict` flag is not added when using `gpt-4o`, but it is when an `o*-mini` model is used. So I detected this bug when migrating to `o3-mini`.

Code example
Reproduction and workaround here: https://gist.github.com/double-thinker/f60bde68cd5705a33288f2000eeec53d
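One way to implement the workaround (the gist may differ in detail) is a custom `fetch` passed to `createOpenAI` that strips the flag before the request reaches OpenAI:

```ts
import { createOpenAI } from '@ai-sdk/openai';

// Sketch of the workaround idea: intercept outgoing requests and delete
// `strict` from every tool definition in the JSON body.
const openaiNoStrict = createOpenAI({
  fetch: async (url, options) => {
    if (typeof options?.body === 'string') {
      const body = JSON.parse(options.body);
      for (const t of body.tools ?? []) {
        if (t.function) delete t.function.strict;
      }
      options = { ...options, body: JSON.stringify(body) };
    }
    return fetch(url, options);
  },
});

// Then use it like the regular provider:
// const model = openaiNoStrict('o3-mini');
```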
AI provider
@ai-sdk/openai 1.1.9
Additional context
No response