fix: omit malformed reasoning items to prevent OpenAI 400 errors #9362
base: main
Conversation
No issues found across 1 file
1 issue found across 1 file (changes from recent commits).
<file name="core/llm/openaiTypeConverters.ts">
<violation number="1" location="core/llm/openaiTypeConverters.ts:818">
P1: Using `"placeholder"` as `encrypted_content` is likely incorrect. The PR description says malformed reasoning items should be "skipped," but this code includes them with an invalid placeholder string. OpenAI's API expects valid encrypted content, so this could still cause 400 errors or unexpected behavior. Consider actually omitting the reasoning item when `encrypted` is falsy, similar to the original logic that returned early.</violation>
</file>
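For illustration, a minimal sketch of the omission the reviewer suggests; the ReasoningPart shape and toReasoningInput name below are assumptions, not the actual converter code:

```typescript
// Hypothetical simplified shape; the real code uses the OpenAI SDK's response input types.
interface ReasoningPart {
  id?: string;
  encrypted_content?: string;
}

// Sketch: build a reasoning input item only when encrypted content is present;
// otherwise return undefined so the caller omits the item from the request,
// rather than sending an invalid placeholder string.
function toReasoningInput(
  part: ReasoningPart,
): { type: "reasoning"; id: string; encrypted_content: string } | undefined {
  if (!part.id || !part.encrypted_content) {
    return undefined;
  }
  return {
    type: "reasoning",
    id: part.id,
    encrypted_content: part.encrypted_content,
  };
}
```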
RomneyDa left a comment
@pallaprolus if this only happens for some newer models, would it cause stripping of all thinking for older models/APIs which do not provide encrypted content?
Added some nitpick comments too
core/llm/openaiTypeConverters.ts
Outdated
function serializeAssistantMessage(
  msg: ChatMessage,
  dropNextAssistantId: boolean,
  pushMessage: (role: "assistant", content: string) => void,
refactor to avoid passing this once-used callback
Refactored callback → discriminated union { type: 'item' | 'skip', ... }
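A rough sketch of that shape, with hypothetical field names (the actual union in the PR may differ):

```typescript
// Hypothetical return type: either a concrete item to append to the request,
// or an explicit skip that also tells the caller whether to drop the next
// assistant message's output-item ID.
type SerializeResult =
  | { type: "item"; item: Record<string, unknown>; dropNextAssistantId: boolean }
  | { type: "skip"; dropNextAssistantId: boolean };

// Callers branch on the discriminant instead of receiving a pushMessage callback.
function applyResult(
  result: SerializeResult,
  items: Record<string, unknown>[],
): boolean {
  if (result.type === "item") {
    items.push(result.item);
  }
  return result.dropNextAssistantId;
}
```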
core/llm/openaiTypeConverters.ts
Outdated
  name: name || "",
  arguments: typeof args === "string" ? args : "{}",
  call_id: call_id,
} as any;
remove any type cast
Removed as any on empty item → { type: 'skip' }
core/llm/openaiTypeConverters.ts
Outdated
if (id) {
  if (!encrypted) {
    // Return empty item signal and flag to drop next ID to prevent 400 error
    return { item: {} as any, dropNextAssistantId: true };
remove any type cast
Removed as any on function call → ResponseFunctionToolCall has optional id
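Roughly what dropping the cast might look like; ResponseFunctionToolCallLike below is a local stand-in for the SDK type mentioned in the thread, not an import from the openai package:

```typescript
// Local stand-in mirroring the fields discussed in the thread; `id` is
// optional, so no `as any` cast is needed when it is absent.
interface ResponseFunctionToolCallLike {
  type: "function_call";
  id?: string;
  name: string;
  arguments: string;
  call_id: string;
}

function toFunctionCallItem(
  name: string | undefined,
  args: unknown,
  call_id: string,
): ResponseFunctionToolCallLike {
  return {
    type: "function_call",
    name: name || "",
    arguments: typeof args === "string" ? args : "{}",
    call_id,
  };
}
```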
core/llm/openaiTypeConverters.ts
Outdated
// BUT `ResponseInputItem` variants are specific.

// Alternative: If we are forced to drop the ID, maybe we just don't send the ID field?
// Let's try to return the object but cast to any to suppress TS if ID is mandatory but we want to test behavior.
"Let's try to..."
This comment feels too conversational; could you make it super concise, just describing the behavior and why?
Done. Condensed verbose comments → concise JSDoc.
Fixes issue continuedev#9359 where OpenAI returns a 400 error when a reasoning item is provided without its required content (malformed). This change:
1. Skips sending reasoning items if they lack encrypted content.
2. Strips the ID from the subsequent assistant message when its corresponding reasoning item is skipped, preventing 'dangling reference' errors.
- Use look-ahead logic to check if the next assistant message has responsesOutputItemId
- Skip reasoning only when: no encrypted_content AND the next message has a reference
- Keep reasoning when there is no reference (prevents breaking older models/APIs)
- Refactor to use discriminated unions instead of callbacks
- Remove as any casts
- Add unit tests for toResponsesInput reasoning handling
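A sketch of the look-ahead rule under simplified, assumed message shapes (the real code operates on Continue's ChatMessage and the OpenAI responses input types):

```typescript
// Hypothetical simplified shapes for illustration only.
interface ThinkingMessage {
  role: "thinking";
  encryptedContent?: string;
}
interface AssistantMessage {
  role: "assistant";
  responsesOutputItemId?: string;
}

// Skip the reasoning item only when it lacks encrypted content AND the
// next assistant message still references it; otherwise keep it so older
// models/APIs that never return encrypted content are unaffected.
function shouldSkipReasoning(
  reasoning: ThinkingMessage,
  nextAssistant: AssistantMessage | undefined,
): boolean {
  return (
    !reasoning.encryptedContent &&
    Boolean(nextAssistant?.responsesOutputItemId)
  );
}
```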
cb00ecc to c20d4c5
Hi RomneyDa, thank you for the review and detailed comments! You're right that the original fix would strip all reasoning for older models/APIs that don't provide encrypted_content. I've updated the approach to use look-ahead logic: reasoning is skipped only when it has no encrypted_content and the next assistant message carries a responsesOutputItemId reference; when there is no reference, we still send the reasoning item without encrypted_content, so older models/APIs are unaffected. Also added unit tests for toResponsesInput covering these scenarios.
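A hedged sketch of what such a test might look like, assuming a Jest-style runner and simplified message objects; the exact toResponsesInput signature, import path, and field names are assumptions:

```typescript
// Jest-style globals (describe/it/expect) assumed to be available.
import { toResponsesInput } from "./openaiTypeConverters"; // assumed export

describe("toResponsesInput reasoning handling", () => {
  it("drops reasoning without encrypted content when the next assistant message references it", () => {
    const items = toResponsesInput([
      { role: "thinking", content: "..." }, // no encrypted content
      { role: "assistant", content: "hi", responsesOutputItemId: "rs_123" },
    ] as any);
    expect(items.some((i: any) => i.type === "reasoning")).toBe(false);
  });

  it("keeps reasoning without encrypted content when nothing references it", () => {
    const items = toResponsesInput([
      { role: "thinking", content: "..." },
      { role: "assistant", content: "hi" },
    ] as any);
    expect(items.some((i: any) => i.type === "reasoning")).toBe(true);
  });
});
```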

Fixes #9359
Description
This PR addresses an issue where OpenAI models (specifically newer ones like o1-preview) return a 400 error when a reasoning item is provided without its required encrypted_content (malformed/missing).
Changes
- In core/llm/openaiTypeConverters.ts, we now check if encrypted content is present for a reasoning item. If not, the item is skipped.
- We also strip the responsesOutputItemId from the subsequent assistant message. This prevents the API from rejecting the request due to a message ID validating against a missing reasoning item.
Verification
Summary by cubic
Prevents OpenAI 400 errors by skipping reasoning without encrypted_content only when the next assistant message references it. Strips the assistant responsesOutputItemId when paired reasoning is dropped and preserves tool call IDs.
Written for commit c20d4c5. Summary will update on new commits.