
Conversation

@romancircus

Summary

Strip reasoning blocks from message history when converting to model format, enabling seamless switching between reasoning and non-reasoning models.

Problem

When using a reasoning model (e.g., Claude with extended thinking) and then switching to Claude Code, the session fails with an "Invalid signature" error. This happens because Claude Code's API doesn't recognize the reasoning blocks that were added by the previous model.

Solution

Filter out reasoning-type parts from messages in convertToModelMessages() before sending them to the model API. This ensures compatibility regardless of which models were used earlier in the session.
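
A minimal sketch of the proposed filter, assuming the part/message shapes below; the actual types in message-v2.ts may differ, and `stripReasoning` is a hypothetical name for illustration:

```ts
// Hypothetical part/message shapes; the real types live in message-v2.ts.
type Part =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string };

interface Message {
  role: "user" | "assistant";
  parts: Part[];
}

// Drop reasoning parts before handing history to the model API.
// NOTE: this is the unconditional strip this PR proposes; see the
// review below for why that breaks stateful reasoning models.
function stripReasoning(messages: Message[]): Message[] {
  return messages.map((message) => ({
    ...message,
    parts: message.parts.filter((part) => part.type !== "reasoning"),
  }));
}
```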

Changes

  • packages/opencode/src/session/message-v2.ts - Filter reasoning parts in message conversion

Testing

  • Verified typecheck passes
  • Manual testing: Start session with reasoning model, switch to Claude Code, continue conversation without errors

@github-actions
Contributor

github-actions bot commented Jan 6, 2026

The following comment was made by an LLM; it may be inaccurate:

Potential Related PRs Found:

  1. #6748 - fix: strip incompatible thinking blocks when switching to Anthropic models

  2. #5531 - Feature/OpenAI compatible reasoning

  3. #6114 - fix(session): prevent GPT-5.2 resume reasoning crash

Recommendation: PR #6748 appears to be the most closely related - it may have already implemented a solution for a similar issue with thinking/reasoning blocks. You should check if that PR's approach can be unified with your fix, or if they should be consolidated.

@rekram1-node
Collaborator

rekram1-node commented Jan 6, 2026

This will not work at all: any stateful reasoning model will completely break. Not to mention that Anthropic models DO require reasoning to be sent back during the current loop, and I think this would error:

[Screenshot 2026-01-05 at 10 52 26 PM]
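
To make the failure mode concrete: with Anthropic's extended thinking, the thinking block emitted during a tool-use turn carries a signature and must be sent back verbatim when the loop continues. A hedged sketch of the assistant-message shape involved; the field names follow Anthropic's documented content-block format, but the values are illustrative:

```ts
// Assistant turn as Anthropic's API returns it during a tool-use loop.
// Stripping the "thinking" block before replaying this message is what
// triggers the signature/validation error.
const assistantTurn = {
  role: "assistant",
  content: [
    {
      type: "thinking",
      thinking: "I should read the file before editing it...",
      signature: "EqQBCg...", // opaque, provider-verified
    },
    {
      type: "tool_use",
      id: "toolu_01...",
      name: "read_file",
      input: { path: "src/index.ts" },
    },
  ],
};
```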

@romancircus
Author

Thanks for the quick feedback @rekram1-node! You're absolutely right - stripping reasoning blocks unconditionally would break stateful reasoning models that need these blocks sent back during the conversation loop.

I see PR #6748 addresses this more correctly at the provider transform layer. I'll close this PR and defer to that approach.
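
For context, a rough sketch of what a provider-scoped version might look like: strip reasoning parts only when they originated from a different provider than the one being called, so reasoning blocks (and their signatures) still round-trip within the same provider's loop. This reuses the Message/Part shapes from the sketch above; the `providerID` field and `stripForeignReasoning` are assumptions for illustration, not the actual #6748 implementation:

```ts
// Hypothetical: keep reasoning parts only when they came from the
// provider we are about to call, since e.g. Anthropic validates the
// signature on thinking blocks it previously emitted.
function stripForeignReasoning(
  messages: (Message & { providerID?: string })[],
  targetProviderID: string,
): Message[] {
  return messages.map((message) => ({
    ...message,
    parts: message.parts.filter(
      (part) =>
        part.type !== "reasoning" || message.providerID === targetProviderID,
    ),
  }));
}
```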

Appreciate the review!

@romancircus romancircus closed this Jan 6, 2026