docs: reasoning effort levels for Codex provider#6798

Merged
blackgirlbytes merged 1 commit into block:main from YusukeShimizu:codex-reasoning-levels-upstream on Feb 12, 2026

Conversation

@YusukeShimizu (Contributor) commented Jan 29, 2026

Add `xhigh` reasoning effort level to Codex CLI documentation.

related: #6522.

Summary

Type of Change

  • Feature
  • Bug fix
  • Refactor / Code quality
  • Performance improvement
  • Documentation
  • Tests
  • Security fix
  • Build / Release
  • Other (specify below)

Add 'xhigh' reasoning effort level to Codex CLI documentation

Signed-off-by: Yusuke Shimizu <stm1051212@gmail.com>
@YusukeShimizu YusukeShimizu marked this pull request as ready for review January 29, 2026 04:49
@YusukeShimizu YusukeShimizu requested a review from a team as a code owner January 29, 2026 04:49
Copilot AI review requested due to automatic review settings January 29, 2026 04:49
Copilot AI left a comment

Pull request overview

Updates Codex CLI provider documentation to reflect the newly supported xhigh reasoning effort level and clarify model-specific support for none.

Changes:

  • Expand the Codex “Features” section to include xhigh and note conditional support for none.
  • Update the CODEX_REASONING_EFFORT environment variable documentation to include xhigh and mention none constraints.

**Features:**
- Uses OpenAI's GPT-5 series models (gpt-5.2-codex, gpt-5.2, gpt-5.1-codex-max, gpt-5.1-codex-mini)
- Configurable reasoning effort levels (low, medium, high)
- Configurable reasoning effort levels (`low`, `medium`, `high`, `xhigh`; `none` is only supported on non-codex models like `gpt-5.2`)
Copilot AI commented Jan 29, 2026

This note about none being “only supported on non-codex models” is incomplete/inaccurate relative to the current provider behavior: none on *-codex models is treated as invalid and goose falls back to high with a warning, so the docs should explicitly describe the fallback to avoid user confusion.

Suggested change
- Configurable reasoning effort levels (`low`, `medium`, `high`, `xhigh`; `none` is only supported on non-codex models like `gpt-5.2`)
- Configurable reasoning effort levels (`low`, `medium`, `high`, `xhigh`; `none` is only supported on non-codex models like `gpt-5.2` and will be treated as invalid on `*-codex` models, where goose falls back to `high` with a warning)

| `GOOSE_MODEL` | Model to use (only known models are passed to CLI) | `gpt-5.2-codex` |
| `CODEX_COMMAND` | Path to the Codex CLI command | `codex` |
| `CODEX_REASONING_EFFORT` | Reasoning effort level: `low`, `medium`, or `high` | `high` |
| `CODEX_REASONING_EFFORT` | Reasoning effort level: `low`, `medium`, `high`, or `xhigh` (`none` is only supported on non-codex models like `gpt-5.2`) | `high` |
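As a usage sketch, the environment variables in the table above might be read like this. This is a hypothetical Python helper illustrating the documented defaults and accepted values, not goose's actual implementation; only the variable name and value set come from the docs.

```python
import os

# Effort levels accepted per the documentation above. The validation
# logic here is a hypothetical sketch, not goose's actual code.
VALID_EFFORTS = {"none", "low", "medium", "high", "xhigh"}

def reasoning_effort() -> str:
    """Read CODEX_REASONING_EFFORT, defaulting to 'high' as documented."""
    effort = os.environ.get("CODEX_REASONING_EFFORT", "high").lower()
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unknown CODEX_REASONING_EFFORT: {effort!r}")
    return effort
```

For example, with `CODEX_REASONING_EFFORT` unset this returns `high`, matching the documented default in the table.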
Copilot AI commented Jan 29, 2026

The description for CODEX_REASONING_EFFORT implies none is simply unsupported on codex models, but the implementation falls back to high (with a warning) when none is set for a *-codex model; documenting that fallback behavior here would make the config outcome unambiguous.

Suggested change
| `CODEX_REASONING_EFFORT` | Reasoning effort level: `low`, `medium`, `high`, or `xhigh` (`none` is only supported on non-codex models like `gpt-5.2`) | `high` |
| `CODEX_REASONING_EFFORT` | Reasoning effort level: `low`, `medium`, `high`, or `xhigh` (`none` is supported on non-codex models like `gpt-5.2`; on `*-codex` models it is accepted but falls back to `high` with a warning) | `high` |
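The fallback behavior this review asks to document could be sketched as follows. This is a hypothetical illustration of the described semantics, not goose's actual code; in particular, the `-codex` substring check is an assumption standing in for the `*-codex` model match.

```python
import warnings

def effective_effort(model: str, effort: str) -> str:
    # Per the review above: 'none' is only supported on non-codex models
    # (e.g. gpt-5.2). On *-codex models it is treated as invalid and the
    # provider falls back to 'high', emitting a warning.
    # NOTE: the substring check below is an assumption for illustration.
    if effort == "none" and "-codex" in model:
        warnings.warn(
            f"reasoning effort 'none' is not supported on {model}; "
            "falling back to 'high'"
        )
        return "high"
    return effort
```

So `effective_effort("gpt-5.2", "none")` keeps `none`, while the same setting on a `*-codex` model resolves to `high` with a warning, which is the outcome the suggested wording makes explicit.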

@blackgirlbytes blackgirlbytes added this pull request to the merge queue Feb 12, 2026
Merged via the queue into block:main with commit dd98e87 Feb 12, 2026
24 of 25 checks passed
tlongwell-block added a commit that referenced this pull request Feb 12, 2026
…provenance

* origin/main: (68 commits)
  Upgraded npm packages for latest security updates (#7183)
  docs: reasoning effort levels for Codex provider (#6798)
  Fix speech local (#7181)
  chore: add .gooseignore to .gitignore (#6826)
  Improve error message logging from electron (#7130)
  chore(deps): bump jsonwebtoken from 9.3.1 to 10.3.0 (#6924)
  docs: standalone mcp apps and apps extension (#6791)
  workflow: auto-update cli-commands on release (#6755)
  feat(apps): Integrate AppRenderer from @mcp-ui/client SDK (#7013)
  fix(MCP): decode resource content (#7155)
  feat: reasoning_content in API for reasoning models (#6322)
  Fix/configure add provider custom headers (#7157)
  fix: handle keyring fallback as success (#7177)
  Update process-wrap to 9.0.3 (9.0.2 is yanked) (#7176)
  feat: support extra field in chatcompletion tool_calls for gemini openai compat (#6184)
  fix: replace panic with proper error handling in get_tokenizer (#7175)
  Lifei/smoke test for developer (#7174)
  fix text editor view broken (#7167)
  docs: White label guide (#6857)
  Add PATH detection back to developer extension (#7161)
  ...

# Conflicts:
#	.github/workflows/nightly.yml
jh-block added a commit that referenced this pull request Feb 13, 2026
* origin/main: (21 commits)
  nit: show dir in title, and less... jank (#7138)
  feat(gemini-cli): use stream-json output and re-use session (#7118)
  chore(deps): bump qs from 6.14.1 to 6.14.2 in /documentation (#7191)
  Switch jsonwebtoken to use aws-lc-rs (already used by rustls) (#7189)
  chore(deps): bump qs from 6.14.1 to 6.14.2 in /evals/open-model-gym/mcp-harness (#7184)
  Add SLSA build provenance attestations to release workflows (#7097)
  fix save and run recipe not working (#7186)
  Upgraded npm packages for latest security updates (#7183)
  docs: reasoning effort levels for Codex provider (#6798)
  Fix speech local (#7181)
  chore: add .gooseignore to .gitignore (#6826)
  Improve error message logging from electron (#7130)
  chore(deps): bump jsonwebtoken from 9.3.1 to 10.3.0 (#6924)
  docs: standalone mcp apps and apps extension (#6791)
  workflow: auto-update cli-commands on release (#6755)
  feat(apps): Integrate AppRenderer from @mcp-ui/client SDK (#7013)
  fix(MCP): decode resource content (#7155)
  feat: reasoning_content in API for reasoning models (#6322)
  Fix/configure add provider custom headers (#7157)
  fix: handle keyring fallback as success (#7177)
  ...
katzdave added a commit that referenced this pull request Feb 13, 2026
…ntext

* 'main' of github.com:block/goose:
  feat: add onFallbackRequest handler to McpAppRenderer (#7208)
  feat: add streaming support for Claude Code CLI provider (#6833)
  fix: The detected filetype is PLAIN_TEXT, but the provided filetype was HTML (#6885)
  Add prompts (#7212)
  Add testing instructions for speech to text (#7185)
  Diagnostic files copying (#7209)
  fix: allow concurrent tool execution within the same MCP extension (#7202)
  fix: handle missing arguments in MCP tool calls to prevent GUI crash (#7143)
  Filter Apps page to only show standalone Goose Apps (#6811)
  opt: use static for Regex (#7205)
  nit: show dir in title, and less... jank (#7138)
  feat(gemini-cli): use stream-json output and re-use session (#7118)
  chore(deps): bump qs from 6.14.1 to 6.14.2 in /documentation (#7191)
  Switch jsonwebtoken to use aws-lc-rs (already used by rustls) (#7189)
  chore(deps): bump qs from 6.14.1 to 6.14.2 in /evals/open-model-gym/mcp-harness (#7184)
  Add SLSA build provenance attestations to release workflows (#7097)
  fix save and run recipe not working (#7186)
  Upgraded npm packages for latest security updates (#7183)
  docs: reasoning effort levels for Codex provider (#6798)