openai: responses models and hardens event streaming handling #6831
Merged
katzdave merged 3 commits into block:main on Feb 17, 2026
Conversation
Extend model checks to include gpt-5.2-codex and gpt-5.2-pro.

Signed-off-by: Yusuke Shimizu <stm1051212@gmail.com>
Contributor
Pull request overview
This PR fixes a bug where OpenAI gpt-5.2-pro and gpt-5.2-codex models were not being routed through the Responses API, causing 404 errors when users tried to select these models.
Changes:
- Extended the `uses_responses_api` function to include the `gpt-5.2-codex` and `gpt-5.2-pro` model prefixes
- Added comprehensive test coverage for the routing logic
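The routing gate described above is a prefix check over the model name. A minimal sketch, assuming a free-function shape — the actual `uses_responses_api` in `crates/goose/src/providers/openai.rs` lives in more context and its full prefix list is not shown here:

```rust
/// Hypothetical sketch of the model-name gate extended by this PR:
/// a model is routed through the Responses API when its name starts
/// with one of a small set of known prefixes.
fn uses_responses_api(model: &str) -> bool {
    // Only the two prefixes added by this PR are shown; the real list
    // in the provider contains other Responses-only models as well.
    const RESPONSES_PREFIXES: &[&str] = &["gpt-5.2-codex", "gpt-5.2-pro"];
    RESPONSES_PREFIXES.iter().any(|p| model.starts_with(p))
}

fn main() {
    // Dated snapshots still match because the check is prefix-based.
    assert!(uses_responses_api("gpt-5.2-pro"));
    assert!(uses_responses_api("gpt-5.2-codex"));
    assert!(!uses_responses_api("gpt-4o"));
    println!("routing checks pass");
}
```

A prefix match (rather than exact-name equality) keeps dated model snapshots like `gpt-5.2-pro-<date>` on the same code path without enumerating every release.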
Add parsing logic for responses stream events.

Signed-off-by: Yusuke Shimizu <stm1051212@gmail.com>
beb82ca to e09b3cd
DOsinga approved these changes on Feb 17, 2026
* origin/main: (263 commits)
  working_dir usage more clear in add_extension (block#6958)
  Use Canonical Models to set context window sizes (block#6723)
  Set up direnv and update flake inputs (block#6526)
  fix: restore subagent tool call notifications after summon refactor (block#7243)
  fix(ui): preserve server config values on partial provider config save (block#7248)
  fix(claude-code): allow goose to run inside a Claude Code session (block#7232)
  fix(openai): route gpt-5 codex via responses and map base paths (block#7254)
  feat: add GoosePlatform to AgentConfig and MCP initialization (block#6931)
  Fix copied over (block#7270)
  feat(gemini-cli): add streaming support via stream-json events (block#7244)
  fix: filter models without tool support from recommended list (block#7198)
  fix(google): handle more thoughtSignature vagaries during streaming (block#7204)
  docs: playwright CLI skill tutorial (block#7261)
  install node in goose dir (block#7220)
  fix: relax test_basic_response assertion for providers returning reasoning_content (block#7249)
  fix: handle reasoning_content for Kimi/thinking models (block#7252)
  feat: sandboxing for macos (block#7197)
  fix(otel): use monotonic_counter prefix and support temporality env var (block#7234)
  Streaming markdown (block#7233)
  Improve compaction messages to enable better post-compaction agent behavior (block#7259)
  ...

# Conflicts:
# crates/goose/src/providers/openai.rs
jh-block added a commit that referenced this pull request on Feb 18, 2026
* origin/main: (49 commits)
  chore: show important keys for provider configuration (#7265)
  fix: subrecipe relative path with summon (#7295)
  fix extension selector not displaying the correct enabled extensions (#7290)
  Use the working dir from the session (#7285)
  Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  feat(otel): make otel logging level configurable (#7271)
  docs: add documentation for Top Of Mind extension (#7283)
  Document gemini 3 thinking levels (#7282)
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)
  Everything is streaming (#7247)
  openai: responses models and hardens event streaming handling (#6831)
  docs: disable ai session naming (#7194)
  Added cmd to validate bundled extensions json (#7217)
  working_dir usage more clear in add_extension (#6958)
  Use Canonical Models to set context window sizes (#6723)
  Set up direnv and update flake inputs (#6526)
  fix: restore subagent tool call notifications after summon refactor (#7243)
  fix(ui): preserve server config values on partial provider config save (#7248)
  fix(claude-code): allow goose to run inside a Claude Code session (#7232)
  ...
Summary
Routes OpenAI `gpt-5.2-pro` and `gpt-5.2-codex` models through the Responses API by extending the model-name gate.

Also hardens Responses API event streaming handling after observing frequent keepalive/unknown SSE events in `gpt-5.2-pro` runs:

- ignore unrecognized events (e.g. `keepalive`) instead of failing the stream
- treat `error` events as failures

I considered implementing a capability-based or canonical-model routing strategy instead of adding another explicit prefix check. Since the long-term naming and versioning policy for OpenAI model IDs in this codepath is still unclear, this change is intentionally kept minimal and explicit to avoid incorrect routing for future models.
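The hardening above amounts to classifying each incoming SSE event instead of assuming every event type is known. A minimal sketch under stated assumptions — the `StreamAction` enum and `classify_event` helper are illustrative names, not the actual types in the goose provider, and only one real Responses event type (`response.output_text.delta`) is shown:

```rust
/// Illustrative outcome of handling one Responses SSE event.
enum StreamAction {
    Text(String),  // emit a text delta to the caller
    Ignore,        // keepalive / unknown event types are skipped
    Fail(String),  // an `error` event aborts the stream as a failure
}

/// Hypothetical sketch of the hardened dispatch: known events are
/// handled, `error` events become failures, and anything unrecognized
/// (including keepalives) is ignored rather than crashing the stream.
fn classify_event(event_type: &str, data: &str) -> StreamAction {
    match event_type {
        "response.output_text.delta" => StreamAction::Text(data.to_string()),
        "error" => StreamAction::Fail(data.to_string()),
        _ => StreamAction::Ignore,
    }
}

fn main() {
    assert!(matches!(classify_event("keepalive", ""), StreamAction::Ignore));
    assert!(matches!(classify_event("error", "boom"), StreamAction::Fail(_)));
    assert!(matches!(
        classify_event("response.output_text.delta", "hi"),
        StreamAction::Text(_)
    ));
    println!("event classification ok");
}
```

The key design choice is the catch-all `_ => Ignore` arm: it makes the parser forward-compatible with event types OpenAI may add later, while still surfacing explicit `error` events as hard failures.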
Type of Change
Testing
Manual Testing
Before (`main`): Selecting `gpt-5.2-pro` fails with a 404 because it is not recognized as a chat model.

After (this PR): Selecting `gpt-5.2-pro` succeeds and saves the configuration.