Conversation

@pallaprolus pallaprolus commented Dec 31, 2025

Fixes #9359

Description

This PR addresses an issue where OpenAI models (specifically newer ones like o1-preview) return a 400 error when a reasoning item is provided without its required encrypted_content (malformed/missing).

Changes

  1. Omit Malformed Reasoning: In core/llm/openaiTypeConverters.ts, we now check if encrypted content is present for a reasoning item. If not, the item is skipped.
  2. Handle Dangling References: When a reasoning item is skipped, we flag this state and strip the responsesOutputItemId from the subsequent assistant message. This prevents the API from rejecting the request due to a message ID that references a missing reasoning item.
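The two steps above can be sketched as a single pass over the conversation history. This is a minimal, illustrative sketch only; the type and function names here are simplified placeholders, not the actual types in core/llm/openaiTypeConverters.ts.

```typescript
// Simplified stand-ins for the real Continue history types (illustrative only).
interface HistoryItem {
  role: "assistant" | "reasoning";
  content: string;
  encryptedContent?: string; // only meaningful for reasoning items
  responsesOutputItemId?: string; // only meaningful for assistant messages
}

function toResponsesInputSketch(history: HistoryItem[]): HistoryItem[] {
  const out: HistoryItem[] = [];
  let dropNextAssistantId = false;
  for (const item of history) {
    if (item.role === "reasoning" && !item.encryptedContent) {
      // Step 1: omit malformed reasoning and remember to strip the paired id.
      dropNextAssistantId = true;
      continue;
    }
    if (item.role === "assistant" && dropNextAssistantId) {
      // Step 2: strip the id so it cannot dangle against the dropped item.
      const { responsesOutputItemId, ...rest } = item;
      out.push(rest);
      dropNextAssistantId = false;
      continue;
    }
    out.push(item);
  }
  return out;
}
```

A reasoning item with encrypted content, and any assistant message not paired with a dropped reasoning item, pass through unchanged.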

Verification

  • Verified logic ensures that message and reasoning streams remain consistent with OpenAI's requirements for paired/encrypted content.

Summary by cubic

Prevents OpenAI 400 errors by skipping reasoning without encrypted_content only when the next assistant message references it. Strips the assistant responsesOutputItemId when paired reasoning is dropped and preserves tool call IDs.

  • Bug Fixes
    • Use look-ahead to skip reasoning only when the next assistant has responsesOutputItemId.
    • Preserve tool call IDs; strip assistant ID when its paired reasoning was dropped.

Written for commit c20d4c5. Summary will update on new commits.

@pallaprolus pallaprolus requested a review from a team as a code owner December 31, 2025 05:36
@pallaprolus pallaprolus requested review from sestinj and removed request for a team December 31, 2025 05:36

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Dec 31, 2025
@continue
Contributor

continue bot commented Dec 31, 2025

All Green - Keep your PRs mergeable

All Green is an AI agent that automatically:

✅ Addresses code review comments

✅ Fixes failing CI checks

✅ Resolves merge conflicts

@github-actions

github-actions bot commented Dec 31, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment
No issues found across 1 file

@pallaprolus
Author

I have read the CLA Document and I hereby sign the CLA

@pallaprolus
Author

recheck

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:S This PR changes 10-29 lines, ignoring generated files. labels Dec 31, 2025
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment
1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="core/llm/openaiTypeConverters.ts">

<violation number="1" location="core/llm/openaiTypeConverters.ts:818">
P1: Using `"placeholder"` as `encrypted_content` is likely incorrect. The PR description says malformed reasoning items should be "skipped," but this code includes them with an invalid placeholder string. OpenAI's API expects valid encrypted content, so this could still cause 400 errors or unexpected behavior. Consider actually omitting the reasoning item when `encrypted` is falsy, similar to the original logic that returned early.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@pallaprolus
Author

recheck

Collaborator

@RomneyDa RomneyDa left a comment
@pallaprolus if this only happens for some newer models, would it cause stripping of all thinking for older models/APIs which do not provide encrypted content?

Added some nitpick comments too

function serializeAssistantMessage(
msg: ChatMessage,
dropNextAssistantId: boolean,
pushMessage: (role: "assistant", content: string) => void,
Collaborator
refactor to avoid passing this once-used callback

Author
Refactored callback → discriminated union { type: 'item' | 'skip', ... }

name: name || "",
arguments: typeof args === "string" ? args : "{}",
call_id: call_id,
} as any;
Collaborator
remove any type cast

Author
Removed as any on empty item → { type: 'skip' }

if (id) {
if (!encrypted) {
// Return empty item signal and flag to drop next ID to prevent 400 error
return { item: {} as any, dropNextAssistantId: true };
Collaborator
remove any type cast

Author
Removed as any on function call → ResponseFunctionToolCall has optional id

// BUT `ResponseInputItem` variants are specific.

// Alternative: If we are forced to drop the ID, maybe we just don't send the ID field?
// Let's try to return the object but cast to any to suppress TS if ID is mandatory but we want to test behavior.
Collaborator
Let's try to...
This comment feels too conversational; could you make it super concise and just describe the behavior and why?

Author
Done. Condensed verbose comments → concise JSDoc

@github-project-automation github-project-automation bot moved this from Todo to In Progress in Issues and PRs Jan 6, 2026
Fixes issue continuedev#9359 where OpenAI returns a 400 error when a reasoning item
is provided without its required content (malformed).

This change:
1. Skips sending reasoning items if they lack encrypted content.
2. Strips the ID from the subsequent assistant message when its
   corresponding reasoning item is skipped, preventing 'dangling reference' errors.
- Use look-ahead logic to check if next assistant message has responsesOutputItemId
- Skip reasoning only when: no encrypted_content AND next message has reference
- Keep reasoning when no reference (prevents breaking older models/APIs)
- Refactor to use discriminated unions instead of callbacks
- Remove as any casts
- Add unit tests for toResponsesInput reasoning handling
@pallaprolus pallaprolus force-pushed the fix/openai-reasoning-400 branch from cb00ecc to c20d4c5 Compare January 7, 2026 01:40
@pallaprolus
Author

@pallaprolus if this only happens for some newer models, would it cause stripping of all thinking for older models/APIs which do not provide encrypted content?

Added some nitpick comments too

Hi RomneyDa, thank you for the review and detailed comments!

You're right that the original fix would strip all reasoning for older models/APIs that don't provide encrypted_content.

I've updated the approach to use look-ahead logic:

  • Skip reasoning only when: no encrypted_content AND the next assistant message has a responsesOutputItemId reference.
  • Keep reasoning when: encrypted_content is present, OR the next assistant message has no responsesOutputItemId reference.

The key insight is that the 400 error only occurs when:

  • we send a reasoning item without encrypted_content,
  • AND the following assistant message references that reasoning via its output item ID.

If there's no reference (e.g., older conversation formats), there's no risk of a 400 error, so we preserve the reasoning.

Also added unit tests for toResponsesInput covering these scenarios.
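The look-ahead condition described above amounts to a small predicate. This is a hedged sketch under simplified types; the function and field names are illustrative rather than the exact ones in core/llm/openaiTypeConverters.ts.

```typescript
// Simplified stand-ins for the real types (illustrative only).
interface ReasoningItem {
  encryptedContent?: string;
}
interface AssistantMsg {
  responsesOutputItemId?: string;
}

// Drop a reasoning item only when it lacks encrypted content AND the next
// assistant message would reference it by output item id. In every other
// case the reasoning is kept, so older models/APIs that never provide
// encrypted content are unaffected.
function shouldSkipReasoning(
  reasoning: ReasoningItem,
  nextAssistant?: AssistantMsg,
): boolean {
  return (
    !reasoning.encryptedContent &&
    nextAssistant?.responsesOutputItemId !== undefined
  );
}
```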


Labels

size:L This PR changes 100-499 lines, ignoring generated files.

Projects

Status: In Progress

Development

Successfully merging this pull request may close these issues.

Error: GPT-5 - 400

2 participants