Merged

dianed-square (Contributor) approved these changes on Feb 17, 2026
Pull request overview
This PR documents the Gemini 3 thinking level configuration feature that was introduced in PR #6585. The feature adds support for configurable thinking levels ("low" and "high") for Gemini 3 models, allowing users to balance response latency with reasoning depth. The default is set to "low" for better latency.
Changes:
- Added `GEMINI3_THINKING_LEVEL` environment variable documentation (a hedged sketch of its effect follows below)
- Updated the Gemini provider description to mention thinking levels support
- Added a comprehensive "Gemini 3 Thinking Levels" section explaining the feature for both the CLI and Desktop interfaces
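The documented behavior above amounts to: read `GEMINI3_THINKING_LEVEL`, accept `"low"` or `"high"`, and default to `"low"` for better latency. The minimal sketch below only illustrates that resolution; it is not goose's actual implementation, and everything other than the variable name and the two supported values is an assumption.

```rust
use std::env;

/// Hedged sketch: resolve the Gemini 3 thinking level from the
/// GEMINI3_THINKING_LEVEL environment variable, falling back to the
/// documented default of "low" (better latency) when the variable is
/// unset or not one of the supported values. The function name and
/// structure are illustrative assumptions, not goose's real code.
fn resolve_thinking_level() -> String {
    env::var("GEMINI3_THINKING_LEVEL")
        .ok()
        .filter(|v| matches!(v.as_str(), "low" | "high"))
        .unwrap_or_else(|| "low".to_string())
}

fn main() {
    println!("Gemini 3 thinking level: {}", resolve_thinking_level());
}
```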
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| `documentation/docs/guides/environment-variables.md` | Added `GEMINI3_THINKING_LEVEL` to the Advanced Provider Configuration table and included an example showing how to configure the thinking level via `GOOSE_PREDEFINED_MODELS` |
| `documentation/docs/getting-started/providers.md` | Updated the Gemini provider entry to mention thinking levels support and added a new section documenting how to configure thinking levels in both Desktop and CLI, including the priority order (a hypothetical sketch of such a precedence follows below) |
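The priority order itself is defined in the updated providers.md, not here. Purely to show what such a precedence chain could look like, the sketch below assumes a per-model value (for example, one carried by a `GOOSE_PREDEFINED_MODELS` entry) wins over the environment variable, which in turn falls back to `"low"`; both that ordering and the `per_model_level` parameter are assumptions for illustration, not taken from the docs.

```rust
use std::env;

/// Hypothetical precedence sketch (the real order lives in providers.md):
/// prefer an explicit per-model setting, then GEMINI3_THINKING_LEVEL,
/// then the "low" default. `per_model_level` stands in for whatever a
/// predefined-models entry might carry; it is assumed for illustration.
fn effective_thinking_level(per_model_level: Option<&str>) -> String {
    per_model_level
        .map(str::to_string)
        .or_else(|| env::var("GEMINI3_THINKING_LEVEL").ok())
        .filter(|v| matches!(v.as_str(), "low" | "high"))
        .unwrap_or_else(|| "low".to_string())
}

fn main() {
    // No per-model override: use the env var if set, otherwise "low".
    println!("{}", effective_thinking_level(None));
    // A per-model override wins under this assumed ordering.
    println!("{}", effective_thinking_level(Some("high")));
}
```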
jh-block added a commit that referenced this pull request on Feb 18, 2026:

* origin/main: (49 commits)
  - chore: show important keys for provider configuration (#7265)
  - fix: subrecipe relative path with summon (#7295)
  - fix extension selector not displaying the correct enabled extensions (#7290)
  - Use the working dir from the session (#7285)
  - Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  - feat(otel): make otel logging level configurable (#7271)
  - docs: add documentation for Top Of Mind extension (#7283)
  - Document gemini 3 thinking levels (#7282)
  - docs: stream subagent tool calls (#7280)
  - Docs: delete custom provider in desktop (#7279)
  - Everything is streaming (#7247)
  - openai: responses models and hardens event streaming handling (#6831)
  - docs: disable ai session naming (#7194)
  - Added cmd to validate bundled extensions json (#7217)
  - working_dir usage more clear in add_extension (#6958)
  - Use Canonical Models to set context window sizes (#6723)
  - Set up direnv and update flake inputs (#6526)
  - fix: restore subagent tool call notifications after summon refactor (#7243)
  - fix(ui): preserve server config values on partial provider config save (#7248)
  - fix(claude-code): allow goose to run inside a Claude Code session (#7232)
  - ...
aharvard added a commit that referenced this pull request on Feb 18, 2026:

* origin/main:
  - feat: add GOOSE_SUBAGENT_MODEL and GOOSE_SUBAGENT_PROVIDER config options (#7277)
  - fix(openai): support "reasoning" field alias in streaming deltas (#7294)
  - fix(ui): revert app-driven iframe width and send containerDimensions per ext-apps spec (#7300)
  - New OpenAI event (#7301)
  - ci: add fork guards to scheduled workflows (#7292)
  - fix: allow ollama input limit override (#7281)
  - chore: show important keys for provider configuration (#7265)
  - fix: subrecipe relative path with summon (#7295)
  - fix extension selector not displaying the correct enabled extensions (#7290)
  - Use the working dir from the session (#7285)
  - Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  - feat(otel): make otel logging level configurable (#7271)
  - docs: add documentation for Top Of Mind extension (#7283)
  - Document gemini 3 thinking levels (#7282)
  - docs: stream subagent tool calls (#7280)
  - Docs: delete custom provider in desktop (#7279)

Conflicts: ui/desktop/src/components/McpApps/McpAppRenderer.tsx
Summary
Document the config settings and variables for Gemini 3 model thinking levels (introduced in #6585).
Type of Change