
Conversation

@bolinfest (Collaborator) commented on Jun 2, 2025

Prior to this PR, we always set reasoning when making a request via the Responses API:

reasoning: Some(Reasoning {
    effort: "high",
    summary: Some(Summary::Auto),
}),

However, if you tried to use the Rust CLI with --model gpt-4.1, the request would fail with:

"Unsupported parameter: 'reasoning.effort' is not supported with this model."

We take a cue from the TypeScript CLI, which checks the model name:

if (this.model.startsWith("o") || this.model.startsWith("codex")) {
  reasoning = { effort: this.config.reasoningEffort ?? "medium" };
  reasoning.summary = "auto";
}

This PR performs a similar check, but also adds support for the following config options:

model_reasoning_effort = "low" | "medium" | "high" | "none"
model_reasoning_summary = "auto" | "concise" | "detailed" | "none"
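For illustration, here is a minimal sketch of how these options might look in a Codex config file; the file path, model names, and exact key placement are assumptions for this example rather than something taken from the PR:

# ~/.codex/config.toml (assumed location)

# Tune reasoning for a model that supports it:
model = "o3"
model_reasoning_effort = "medium"
model_reasoning_summary = "concise"

# For a model whose name starts with "o" but does not actually support the
# reasoning field, disable it explicitly:
# model_reasoning_effort = "none"
# model_reasoning_summary = "none"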

This way, if you have a model whose name happens to start with "o" (or "codex"?), you can set these options to "none" to explicitly disable reasoning. (That said, it seems unlikely anyone would use the Responses API with non-OpenAI models, but we provide an escape hatch anyway.)
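To make the behavior concrete, here is a minimal Rust sketch of how the model-name check and the "none" escape hatch could combine to decide whether the reasoning field is sent at all. The type and function names below are hypothetical and are not necessarily the ones used in this PR:

// Hypothetical names; the real types in the Rust CLI may differ.
enum ReasoningEffortConfig { Low, Medium, High, None }
enum ReasoningSummaryConfig { Auto, Concise, Detailed, None }

struct Reasoning {
    effort: &'static str,
    summary: Option<&'static str>,
}

/// Build the `reasoning` payload only when the model family supports it
/// and the user has not disabled it via `model_reasoning_effort = "none"`.
fn reasoning_for_request(
    model: &str,
    effort: ReasoningEffortConfig,
    summary: ReasoningSummaryConfig,
) -> Option<Reasoning> {
    // Same model-name heuristic as the TypeScript CLI.
    if !(model.starts_with("o") || model.starts_with("codex")) {
        return None;
    }
    let effort = match effort {
        ReasoningEffortConfig::Low => "low",
        ReasoningEffortConfig::Medium => "medium",
        ReasoningEffortConfig::High => "high",
        // The explicit escape hatch: omit the field entirely.
        ReasoningEffortConfig::None => return None,
    };
    let summary = match summary {
        ReasoningSummaryConfig::Auto => Some("auto"),
        ReasoningSummaryConfig::Concise => Some("concise"),
        ReasoningSummaryConfig::Detailed => Some("detailed"),
        ReasoningSummaryConfig::None => None,
    };
    Some(Reasoning { effort, summary })
}

With this shape, a request for gpt-4.1 never includes a reasoning block, while a model whose name starts with "o" can still opt out via the "none" settings.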

This PR also updates both the TUI and codex exec to show reasoning effort and reasoning summaries in the header.

@bolinfest merged commit 0f3cc8f into main on Jun 2, 2025 (9 checks passed).
@bolinfest deleted the pr1199 branch on June 2, 2025 at 23:01.
@github-actions (bot) locked and limited conversation to collaborators on Jun 2, 2025.
