
Document gemini 3 thinking levels #7282

Merged
emma-squared merged 1 commit into main from docs/gemini-thinking on Feb 17, 2026
Conversation

@emma-squared
Contributor

Summary

Document the config settings and variables for Gemini 3 model thinking levels. #6585

Type of Change

  • Feature
  • Bug fix
  • Refactor / Code quality
  • Performance improvement
  • Documentation
  • Tests
  • Security fix
  • Build / Release
  • Other (specify below)

@emma-squared emma-squared requested a review from a team as a code owner February 17, 2026 22:20
@github-actions
Contributor

github-actions bot commented Feb 17, 2026

PR Preview Action v1.8.1
Preview removed because the pull request was closed.
2026-02-17 22:46 UTC

Contributor

Copilot AI left a comment


Pull request overview

This PR documents the Gemini 3 thinking level configuration feature that was introduced in PR #6585. The feature adds support for configurable thinking levels ("low" and "high") for Gemini 3 models, allowing users to balance response latency with reasoning depth. The default is set to "low" for better latency.

Changes:

  • Added GEMINI3_THINKING_LEVEL environment variable documentation
  • Updated Gemini provider description to mention thinking levels support
  • Added comprehensive "Gemini 3 Thinking Levels" section explaining the feature for both CLI and Desktop interfaces
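The environment variable and its "low"/"high" values come from this PR; the validation and echo logic below is purely an illustrative sketch, not code from the documented feature.

```shell
#!/bin/sh
# Illustrative sketch only: GEMINI3_THINKING_LEVEL and its "low"/"high"
# values are documented by this PR; "low" is the default, chosen for
# better latency, while "high" trades latency for reasoning depth.
LEVEL="${GEMINI3_THINKING_LEVEL:-low}"   # fall back to the documented default

# Reject anything other than the two documented levels.
case "$LEVEL" in
  low|high) ;;
  *) echo "unsupported thinking level: $LEVEL" >&2; exit 1 ;;
esac

export GEMINI3_THINKING_LEVEL="$LEVEL"
echo "GEMINI3_THINKING_LEVEL=$GEMINI3_THINKING_LEVEL"
```

With the variable unset, the sketch falls back to "low", matching the default described in the PR summary.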

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

File: documentation/docs/guides/environment-variables.md
Added the GEMINI3_THINKING_LEVEL variable to the Advanced Provider Configuration table and included an example showing how to configure the thinking level via GOOSE_PREDEFINED_MODELS.

File: documentation/docs/getting-started/providers.md
Updated the Gemini provider entry to mention thinking levels support and added a new section documenting how to configure thinking levels in both Desktop and CLI, including the priority order.

@emma-squared emma-squared added this pull request to the merge queue Feb 17, 2026
Merged via the queue into main with commit 52415df Feb 17, 2026
27 checks passed
@emma-squared emma-squared deleted the docs/gemini-thinking branch February 17, 2026 22:44
jh-block added a commit that referenced this pull request Feb 18, 2026
* origin/main: (49 commits)
  chore: show important keys for provider configuration (#7265)
  fix: subrecipe relative path with summon (#7295)
  fix extension selector not displaying the correct enabled extensions (#7290)
  Use the working dir from the session (#7285)
  Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  feat(otel): make otel logging level configurable (#7271)
  docs: add documentation for Top Of Mind extension (#7283)
  Document gemini 3 thinking levels (#7282)
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)
  Everything is streaming (#7247)
  openai: responses models and hardens event streaming handling (#6831)
  docs: disable ai session naming (#7194)
  Added cmd to validate bundled extensions json (#7217)
  working_dir usage more clear in add_extension (#6958)
  Use Canonical Models to set context window sizes (#6723)
  Set up direnv and update flake inputs (#6526)
  fix: restore subagent tool call notifications after summon refactor (#7243)
  fix(ui): preserve server config values on partial provider config save (#7248)
  fix(claude-code): allow goose to run inside a Claude Code session (#7232)
  ...
aharvard added a commit that referenced this pull request Feb 18, 2026
* origin/main:
  feat: add GOOSE_SUBAGENT_MODEL and GOOSE_SUBAGENT_PROVIDER config options (#7277)
  fix(openai): support "reasoning" field alias in streaming deltas (#7294)
  fix(ui): revert app-driven iframe width and send containerDimensions per ext-apps spec (#7300)
  New OpenAI event (#7301)
  ci: add fork guards to scheduled workflows (#7292)
  fix: allow ollama input limit override (#7281)
  chore: show important keys for provider configuration (#7265)
  fix: subrecipe relative path with summon (#7295)
  fix extension selector not displaying the correct enabled extensions (#7290)
  Use the working dir from the session (#7285)
  Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  feat(otel): make otel logging level configurable (#7271)
  docs: add documentation for Top Of Mind extension (#7283)
  Document gemini 3 thinking levels (#7282)
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)

# Conflicts:
#	ui/desktop/src/components/McpApps/McpAppRenderer.tsx
