
Conversation

@trongtrandp

Summary

  • Add support for Claude's thinking.type = "adaptive" and output_config.effort parameter across all translators (Gemini, Gemini CLI, Antigravity, OpenAI, Codex)
  • adaptive without effort defaults to level "high"; enabled without budget_tokens preserves backward-compatible behavior per translator
  • Gemini-family translators use thinkingLevel instead of hardcoded budget (24576), letting ApplyThinking resolve actual budget from model config / user payload config
  • effort = "max" capped to "high" for OpenAI/Codex (only support low/medium/high)
  • Strip output_config when model doesn't support thinking

Changes

| File | Change |
| --- | --- |
| thinking/apply.go | extractClaudeConfig handles adaptive + output_config.effort; added MapClaudeEffortToLevel helper (sketched below) |
| thinking/strip.go | Strip output_config for Claude provider |
| antigravity_claude_request.go | adaptive support + thinkingLevel instead of hardcoded budget |
| gemini_claude_request.go | Same as antigravity (without request. prefix) |
| gemini-cli_claude_request.go | Same as antigravity |
| openai_claude_request.go | adaptive → "high", enabled → auto; effort mapped + xhigh capped |
| codex_claude_request.go | adaptive → "high", enabled → "medium"; effort mapped + xhigh capped |
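
The MapClaudeEffortToLevel helper added in thinking/apply.go is not quoted anywhere in this thread. A minimal sketch of what such a mapping could look like is below; the ThinkingLevel type and Level* constants are re-declared only to keep the sketch self-contained and are assumptions consistent with this PR, not copied from the repository:

```go
// Illustrative sketch, not the actual contents of internal/thinking/apply.go.
package thinking

type ThinkingLevel string

const (
	LevelLow    ThinkingLevel = "low"
	LevelMedium ThinkingLevel = "medium"
	LevelHigh   ThinkingLevel = "high"
	LevelXHigh  ThinkingLevel = "xhigh"
)

// MapClaudeEffortToLevel converts Claude's output_config.effort values into an
// internal thinking level string; unknown values return "" so callers can keep
// their existing default.
func MapClaudeEffortToLevel(effort string) string {
	switch effort {
	case "low":
		return string(LevelLow)
	case "medium":
		return string(LevelMedium)
	case "high":
		return string(LevelHigh)
	case "max":
		// "max" has no direct provider equivalent; return the highest internal
		// level and let per-provider translators cap it (OpenAI/Codex cap to high).
		return string(LevelXHigh)
	default:
		return ""
	}
}
```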

Test plan

  • Verify thinking.type = "adaptive" without effort defaults to high across all providers
  • Verify thinking.type = "enabled" without budget_tokens preserves existing behavior
  • Verify output_config.effort = "max" maps to xhigh for Gemini and is capped to high for OpenAI/Codex (see the test sketch after this list)
  • Verify output_config.effort overrides adaptive default when both present
  • Verify models that don't support thinking strip both thinking and output_config
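
The effort-mapping items in this plan lend themselves to a table-driven test; a sketch is below. The module import path is a placeholder, it assumes the Level* constants and MapClaudeEffortToLevel signature described in this PR, and the OpenAI/Codex capping rule is inlined rather than exercised through the real translators:

```go
package thinking_test

import (
	"testing"

	"example.com/yourmodule/internal/thinking" // placeholder module path
)

func TestMapClaudeEffortToLevel(t *testing.T) {
	cases := []struct {
		effort string // Claude output_config.effort value
		gemini string // level expected for Gemini-family translators
		capped string // level expected after the OpenAI/Codex cap
	}{
		{"low", string(thinking.LevelLow), string(thinking.LevelLow)},
		{"medium", string(thinking.LevelMedium), string(thinking.LevelMedium)},
		{"high", string(thinking.LevelHigh), string(thinking.LevelHigh)},
		{"max", string(thinking.LevelXHigh), string(thinking.LevelHigh)},
	}
	for _, c := range cases {
		got := thinking.MapClaudeEffortToLevel(c.effort)
		if got != c.gemini {
			t.Errorf("MapClaudeEffortToLevel(%q) = %q, want %q", c.effort, got, c.gemini)
		}
		// OpenAI/Codex only support low/medium/high, so xhigh is capped before sending.
		if got == string(thinking.LevelXHigh) {
			got = string(thinking.LevelHigh)
		}
		if got != c.capped {
			t.Errorf("capped level for %q = %q, want %q", c.effort, got, c.capped)
		}
	}
}
```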

🤖 Generated with Claude Code

Add support for Claude's `thinking.type = "adaptive"` and
`output_config.effort` parameter across all translators (Gemini,
Gemini CLI, Antigravity, OpenAI, Codex).

- adaptive without effort defaults to level "high"
- enabled without budget preserves backward-compatible behavior
- output_config.effort mapped via MapClaudeEffortToLevel
- effort "max" capped to "high" for OpenAI/Codex (only support low/medium/high)
- Gemini translators use thinkingLevel instead of hardcoded budget,
  letting ApplyThinking resolve actual budget from model config
- Strip output_config when model doesn't support thinking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @trongtrandp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the system's capability to handle advanced thinking configurations from Claude models by integrating support for the new adaptive thinking type and the output_config.effort parameter. The changes ensure consistent and intelligent translation of these parameters across various backend AI providers, including Gemini, OpenAI, and Codex, while maintaining backward compatibility and respecting provider-specific limitations on thinking effort levels.

Highlights

  • Adaptive Thinking Support: Added comprehensive support for Claude's new thinking.type = "adaptive" across all relevant translators (Gemini, Gemini CLI, Antigravity, OpenAI, Codex).
  • Output Config Effort Parameter: Implemented support for Claude's output_config.effort parameter, allowing users to specify the thinking effort level.
  • Default Behavior and Backward Compatibility: Configured adaptive thinking to default to a 'high' effort level if output_config.effort is not explicitly provided, and ensured enabled thinking without budget_tokens maintains existing backward-compatible behavior.
  • Gemini Translator Enhancements: Updated Gemini-family translators to use thinkingLevel instead of hardcoded budget values, enabling ApplyThinking to resolve the actual budget from model configuration (pictured in the sketch after this list).
  • Effort Level Capping: Introduced logic to cap output_config.effort = "max" to a 'high' level for OpenAI and Codex translators, aligning with their supported thinking effort ranges.
  • Configuration Stripping: Extended the configuration stripping mechanism to also remove output_config when the target model does not support thinking, preventing unsupported parameters from being sent.
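
As referenced in the highlight above, the level-to-budget resolution that ApplyThinking performs can be pictured roughly as follows; the helper name, config shape, and example numbers are invented for illustration and are not the actual ApplyThinking code:

```go
package thinking

// resolveThinkingBudget turns a thinkingLevel into a concrete token budget
// using per-model limits. Hypothetical sketch only.
//
// e.g. modelBudgets = map[string]int{"low": 1024, "medium": 8192, "high": 24576}
// keeps 24576 for "high" on models where that used to be the hardcoded value,
// while other models can map the same level to a different ceiling.
func resolveThinkingBudget(level string, modelBudgets map[string]int, fallback int) int {
	if budget, ok := modelBudgets[level]; ok {
		return budget
	}
	// Unknown or empty level: keep the model's default (or -1 for "auto").
	return fallback
}
```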


Changelog
  • internal/thinking/apply.go
    • Updated extractClaudeConfig to recognize adaptive thinking type and handle output_config.effort.
    • Added logic to default adaptive thinking to LevelHigh if output_config.effort is not specified.
    • Introduced MapClaudeEffortToLevel function to convert Claude's output_config.effort strings to internal ThinkingLevel strings.
  • internal/thinking/strip.go
    • Modified StripThinkingConfig to include output_config in the list of paths stripped for the Claude provider (a simplified sketch follows this changelog).
  • internal/translator/antigravity/claude/antigravity_claude_request.go
    • Updated ConvertClaudeRequestToAntigravity to recognize adaptive thinking type for interleaved thinking hints.
    • Modified thinking translation logic to handle adaptive type, default to high thinking level if no budget/effort is specified, and map output_config.effort to thinkingLevel.
  • internal/translator/codex/claude/codex_claude_request.go
    • Updated ConvertClaudeRequestToCodex to support adaptive thinking type, defaulting to high effort if no budget is provided.
    • Implemented mapping for output_config.effort to reasoning.effort, capping LevelXHigh to LevelHigh due to Codex limitations.
  • internal/translator/gemini-cli/claude/gemini-cli_claude_request.go
    • Added import for internal/thinking package.
    • Updated ConvertClaudeRequestToCLI to recognize adaptive thinking type, default to high thinking level if no budget/effort is specified, and map output_config.effort to thinkingLevel.
  • internal/translator/gemini/claude/gemini_claude_request.go
    • Added import for internal/thinking package.
    • Updated ConvertClaudeRequestToGemini to recognize adaptive thinking type, default to high thinking level if no budget/effort is specified, and map output_config.effort to thinkingLevel.
  • internal/translator/openai/claude/openai_claude_request.go
    • Updated ConvertClaudeRequestToOpenAI to support adaptive thinking type, defaulting to high effort if no budget is provided.
    • Implemented mapping for output_config.effort to reasoning_effort, capping LevelXHigh to LevelHigh due to OpenAI limitations.
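
For the strip.go entry referenced above, the behavior can be sketched as below; the function name and shape are simplified for illustration, and the real StripThinkingConfig covers more providers and paths:

```go
package thinking

import "github.com/tidwall/sjson"

// stripClaudeThinkingConfig removes thinking-related fields that a model
// without thinking support should never receive. For the Claude provider,
// output_config is now deleted alongside thinking. Simplified sketch only.
func stripClaudeThinkingConfig(rawJSON []byte) []byte {
	for _, path := range []string{"thinking", "output_config"} {
		if out, err := sjson.DeleteBytes(rawJSON, path); err == nil {
			rawJSON = out
		}
	}
	return rawJSON
}
```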

@gemini-code-assist bot left a comment

Code Review

This pull request adds support for Claude's adaptive thinking and output_config.effort parameter across various translators. The implementation is mostly correct, but I've identified a couple of areas for improvement. There's a recurring bug in both the codex and openai translators where the xhigh effort level isn't properly capped when derived from thinking.budget_tokens, which could lead to invalid requests. Additionally, there's a fair amount of duplicated code across the Gemini-family translators that could be refactored to improve maintainability. My specific comments provide detailed suggestions to address these issues.

Comment on lines 216 to +244
if thinkingConfig := rootResult.Get("thinking"); thinkingConfig.Exists() && thinkingConfig.IsObject() {
	switch thinkingConfig.Get("type").String() {
	case "enabled", "adaptive": // was: case "enabled":
		if budgetTokens := thinkingConfig.Get("budget_tokens"); budgetTokens.Exists() {
			budget := int(budgetTokens.Int())
			if effort, ok := thinking.ConvertBudgetToLevel(budget); ok && effort != "" {
				reasoningEffort = effort
			}
		} else if thinkingConfig.Get("type").String() == "adaptive" {
			// "adaptive" without budget_tokens: default to "high"
			reasoningEffort = "high"
		}
		// "enabled" without budget_tokens: keep default "medium"
	case "disabled":
		if effort, ok := thinking.ConvertBudgetToLevel(0); ok && effort != "" {
			reasoningEffort = effort
		}
	}
}
// output_config.effort takes priority (Claude Opus 4.6+ adaptive thinking)
if effort := rootResult.Get("output_config.effort"); effort.Exists() && effort.String() != "" {
	if mapped := thinking.MapClaudeEffortToLevel(effort.String()); mapped != "" {
		// Cap xhigh to high — Codex only supports low/medium/high
		if mapped == string(thinking.LevelXHigh) {
			mapped = string(thinking.LevelHigh)
		}
		reasoningEffort = mapped
	}
}

Severity: high

There's a bug in how the reasoningEffort is determined and capped. The current logic only caps the xhigh effort level to high when it's derived from output_config.effort. However, thinking.ConvertBudgetToLevel can also return xhigh for a large budget_tokens value, which is not supported by the Codex provider. This would result in an invalid xhigh value being sent.

I suggest refactoring this block to determine the reasoningEffort from both thinking and output_config.effort first, and then apply the capping logic once at the end before setting the value. This will fix the bug and also make the code cleaner.

	if thinkingConfig := rootResult.Get("thinking"); thinkingConfig.Exists() && thinkingConfig.IsObject() {
		thinkingType := thinkingConfig.Get("type").String()
		switch thinkingType {
		case "enabled", "adaptive":
			if budgetTokens := thinkingConfig.Get("budget_tokens"); budgetTokens.Exists() {
				budget := int(budgetTokens.Int())
				if effort, ok := thinking.ConvertBudgetToLevel(budget); ok && effort != "" {
					reasoningEffort = effort
				}
			} else if thinkingType == "adaptive" {
				reasoningEffort = "high"
			}
			// "enabled" without budget_tokens keeps default "medium"
		case "disabled":
			if effort, ok := thinking.ConvertBudgetToLevel(0); ok && effort != "" {
				reasoningEffort = effort
			}
		}
	}
	// output_config.effort takes priority (Claude Opus 4.6+ adaptive thinking)
	if effort := rootResult.Get("output_config.effort"); effort.Exists() && effort.String() != "" {
		if mapped := thinking.MapClaudeEffortToLevel(effort.String()); mapped != "" {
			reasoningEffort = mapped
		}
	}

	// Cap xhigh to high — Codex only supports low/medium/high
	if reasoningEffort == string(thinking.LevelXHigh) {
		reasoningEffort = string(thinking.LevelHigh)
	}

Comment on lines +382 to 406
// Map Anthropic thinking -> Gemini thinkingBudget/include_thoughts when type==enabled or adaptive
if t := gjson.GetBytes(rawJSON, "thinking"); enableThoughtTranslate && t.Exists() && t.IsObject() {
	// was: if t.Get("type").String() == "enabled" {
	tType := t.Get("type").String()
	if tType == "enabled" || tType == "adaptive" {
		if b := t.Get("budget_tokens"); b.Exists() && b.Type == gjson.Number {
			budget := int(b.Int())
			out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingBudget", budget)
			out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.includeThoughts", true)
		} else {
			// No budget_tokens: signal auto so ApplyThinking resolves from model config
			out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingBudget", -1)
			out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.includeThoughts", true)
			// adaptive without budget_tokens defaults to level "high"
			if tType == "adaptive" {
				out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingLevel", "high")
			}
		}
		// output_config.effort overrides the default level — ApplyThinking converts to budget via model config
		if effort := gjson.GetBytes(rawJSON, "output_config.effort"); effort.Exists() && effort.String() != "" {
			if level := thinking.MapClaudeEffortToLevel(effort.String()); level != "" {
				out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingLevel", level)
			}
		}
	}
}

Severity: medium

There's significant code duplication for handling Claude's thinking parameters across the Gemini-family translators (antigravity, gemini-cli, and gemini). This logic is nearly identical in all three files, with the only major difference being the JSON path prefix for setting the configuration.

To improve maintainability and reduce redundancy, consider extracting this logic into a shared helper function within the internal/translator/gemini/common package. This function could take the raw JSON, the output buffer, the path prefix, and the enableThoughtTranslate flag as arguments.

This would centralize the logic, making future updates easier and less error-prone.
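
To make that suggestion concrete, a shared helper along these lines could live in internal/translator/gemini/common; the function name, signature, and placeholder module path below are illustrative of the reviewer's proposal rather than code that exists in this PR:

```go
package common

import (
	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"

	"example.com/yourmodule/internal/thinking" // placeholder module path
)

// ApplyClaudeThinkingConfig maps Claude's thinking/output_config fields onto a
// Gemini thinkingConfig. pathPrefix is "" for the Gemini translator and
// "request." for the Antigravity and Gemini CLI translators.
func ApplyClaudeThinkingConfig(rawJSON []byte, out, pathPrefix string, enableThoughtTranslate bool) string {
	t := gjson.GetBytes(rawJSON, "thinking")
	if !enableThoughtTranslate || !t.Exists() || !t.IsObject() {
		return out
	}
	tType := t.Get("type").String()
	if tType != "enabled" && tType != "adaptive" {
		return out
	}
	cfg := pathPrefix + "generationConfig.thinkingConfig."
	if b := t.Get("budget_tokens"); b.Exists() && b.Type == gjson.Number {
		out, _ = sjson.Set(out, cfg+"thinkingBudget", int(b.Int()))
	} else {
		// No budget_tokens: signal auto so ApplyThinking resolves from model config.
		out, _ = sjson.Set(out, cfg+"thinkingBudget", -1)
		if tType == "adaptive" {
			// adaptive without budget_tokens defaults to level "high".
			out, _ = sjson.Set(out, cfg+"thinkingLevel", "high")
		}
	}
	out, _ = sjson.Set(out, cfg+"includeThoughts", true)
	// output_config.effort overrides the default level; ApplyThinking converts it to a budget.
	if effort := gjson.GetBytes(rawJSON, "output_config.effort"); effort.Exists() && effort.String() != "" {
		if level := thinking.MapClaudeEffortToLevel(effort.String()); level != "" {
			out, _ = sjson.Set(out, cfg+"thinkingLevel", level)
		}
	}
	return out
}
```

Each translator would then call this helper with its own path prefix, keeping the adaptive/effort handling in one place.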

