
openai: responses models and hardens event streaming handling #6831

Merged
katzdave merged 3 commits into block:main from YusukeShimizu:responses-gpt-5.2-x
Feb 17, 2026

Conversation

Contributor

@YusukeShimizu YusukeShimizu commented Jan 30, 2026

Summary

Routes OpenAI gpt-5.2-pro and gpt-5.2-codex models through the Responses API by extending the model-name gate.

It also hardens Responses API event-stream handling, after frequent keepalive/unknown SSE events were observed in gpt-5.2-pro runs:

  • skips SSE comment/keepalive lines (lines starting with :)
  • ignores unknown stream event types (for example, keepalive)
  • continues to process known Responses events normally
  • still surfaces error events as failures
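The skip/ignore/process/fail rules above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the `StreamAction` type, the `classify_*` helpers, and the `KNOWN_EVENTS` list are all hypothetical names.

```rust
/// What to do with a single line / event from the Responses SSE stream
/// (illustrative; the real handling lives in the OpenAI provider).
#[derive(Debug, PartialEq)]
enum StreamAction {
    Skip,            // SSE comment/keepalive line (starts with ':')
    Ignore,          // unknown event type, e.g. "keepalive"
    Process(String), // known Responses event, processed normally
    Fail(String),    // error event, surfaced as a failure
}

// Illustrative subset of known Responses event types.
const KNOWN_EVENTS: &[&str] = &["response.output_text.delta", "response.completed"];

/// First pass over raw SSE lines: comment lines carry no data and are skipped.
fn classify_line(line: &str) -> Option<StreamAction> {
    if line.starts_with(':') {
        Some(StreamAction::Skip)
    } else {
        None // a data line; parse its payload and call classify_event
    }
}

/// Second pass, on the parsed event's "type" field.
fn classify_event(event_type: &str) -> StreamAction {
    if event_type == "error" {
        StreamAction::Fail(event_type.to_string())
    } else if KNOWN_EVENTS.contains(&event_type) {
        StreamAction::Process(event_type.to_string())
    } else {
        StreamAction::Ignore // tolerate unknown types instead of failing
    }
}
```

The key design point is the last arm: an unrecognized event type falls through to `Ignore` rather than an error, so new or keepalive-style events from the server never abort a run.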

I considered implementing a capability-based or canonical-model routing strategy instead of adding another explicit prefix check. Since the long-term naming and versioning policy for OpenAI model IDs in this codepath is still unclear, this change is intentionally kept minimal and explicit to avoid incorrect routing for future models.
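As a minimal sketch of the explicit gate described above, assuming a simple prefix check (the real function, `uses_responses_api` in `crates/goose/src/providers/openai.rs`, covers more prefixes than shown here):

```rust
// Illustrative sketch of the model-name gate; the prefix list is a
// subset chosen for this example, not the full production list.
fn uses_responses_api(model: &str) -> bool {
    // Deliberately explicit prefix checks rather than capability-based
    // routing, so future model IDs are never mis-routed by accident.
    const RESPONSES_PREFIXES: &[&str] = &["gpt-5.2-codex", "gpt-5.2-pro"];
    RESPONSES_PREFIXES.iter().any(|p| model.starts_with(p))
}
```

Prefix matching (rather than exact equality) lets dated variants such as a hypothetical `gpt-5.2-pro-2026-01-30` route the same way as the base model ID.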

Type of Change

  • Feature
  • Bug fix
  • Refactor / Code quality
  • Performance improvement
  • Documentation
  • Tests
  • Security fix
  • Build / Release
  • Other (specify below)

Testing

Manual Testing

Before (main):
Selecting gpt-5.2-pro fails with a 404 because it is not recognized as a chat model.

◇  Select a model:
│  gpt-5.2-pro 

◇  Request failed: Resource not found (404): This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?

└  Failed to configure provider: init chat completion request with tool did not succeed.

After (this PR):
Selecting gpt-5.2-pro succeeds and saves the configuration.

◇  Select a model:
│  gpt-5.2-pro 

◒  Checking your configuration...                                               
└  Configuration saved successfully to /Users/bruwbird/.config/goose/config.yaml

Extend model checks to include gpt-5.2-codex and gpt-5.2-pro.

Signed-off-by: Yusuke Shimizu <stm1051212@gmail.com>
@YusukeShimizu YusukeShimizu marked this pull request as ready for review January 30, 2026 09:46
Copilot AI review requested due to automatic review settings January 30, 2026 09:46

Copilot AI left a comment


Pull request overview

This PR fixes a bug where OpenAI gpt-5.2-pro and gpt-5.2-codex models were not being routed through the Responses API, causing 404 errors when users tried to select these models.

Changes:

  • Extended the uses_responses_api function to include gpt-5.2-codex and gpt-5.2-pro model prefixes
  • Added comprehensive test coverage for the routing logic

Add parsing logic for responses stream events.

Signed-off-by: Yusuke Shimizu <stm1051212@gmail.com>
@YusukeShimizu YusukeShimizu changed the title from "openai: responses API model checks and add tests" to "openai: responses models and hardens event streaming handling" Feb 12, 2026
Collaborator

@DOsinga DOsinga left a comment


/cc @katzdave

* origin/main: (263 commits)
  working_dir usage more clear in add_extension (block#6958)
  Use Canonical Models to set context window sizes (block#6723)
  Set up direnv and update flake inputs (block#6526)
  fix: restore subagent tool call notifications after summon refactor (block#7243)
  fix(ui): preserve server config values on partial provider config save (block#7248)
  fix(claude-code): allow goose to run inside a Claude Code session (block#7232)
  fix(openai): route gpt-5 codex via responses and map base paths (block#7254)
  feat: add GoosePlatform to AgentConfig and MCP initialization (block#6931)
  Fix copied over (block#7270)
  feat(gemini-cli): add streaming support via stream-json events (block#7244)
  fix: filter models without tool support from recommended list (block#7198)
  fix(google): handle more thoughtSignature vagaries during streaming (block#7204)
  docs: playwright CLI skill tutorial (block#7261)
  install node in goose dir (block#7220)
  fix: relax test_basic_response assertion for providers returning reasoning_content (block#7249)
  fix: handle reasoning_content for Kimi/thinking models (block#7252)
  feat: sandboxing for macos (block#7197)
  fix(otel): use monotonic_counter prefix and support temporality env var (block#7234)
  Streaming markdown (block#7233)
  Improve compaction messages to enable better post-compaction agent behavior (block#7259)
  ...

# Conflicts:
#	crates/goose/src/providers/openai.rs
Copilot AI review requested due to automatic review settings February 17, 2026 18:16

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.

@katzdave katzdave added this pull request to the merge queue Feb 17, 2026
Merged via the queue into block:main with commit 65d3a15 Feb 17, 2026
26 checks passed
zanesq added a commit that referenced this pull request Feb 17, 2026
…ions-fallback

* 'main' of github.com:block/goose:
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)
  Everything is streaming (#7247)
  openai: responses models and hardens event streaming handling (#6831)
  docs: disable ai session naming (#7194)
jh-block added a commit that referenced this pull request Feb 18, 2026
* origin/main: (49 commits)
  chore: show important keys for provider configuration (#7265)
  fix: subrecipe relative path with summon (#7295)
  fix extension selector not displaying the correct enabled extensions (#7290)
  Use the working dir from the session (#7285)
  Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  feat(otel): make otel logging level configurable (#7271)
  docs: add documentation for Top Of Mind extension (#7283)
  Document gemini 3 thinking levels (#7282)
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)
  Everything is streaming (#7247)
  openai: responses models and hardens event streaming handling (#6831)
  docs: disable ai session naming (#7194)
  Added cmd to validate bundled extensions json (#7217)
  working_dir usage more clear in add_extension (#6958)
  Use Canonical Models to set context window sizes (#6723)
  Set up direnv and update flake inputs (#6526)
  fix: restore subagent tool call notifications after summon refactor (#7243)
  fix(ui): preserve server config values on partial provider config save (#7248)
  fix(claude-code): allow goose to run inside a Claude Code session (#7232)
  ...

3 participants
