diff --git a/documentation/docs/guides/config-files.md b/documentation/docs/guides/config-files.md
index dccb79104d3c..6681fbd2954c 100644
--- a/documentation/docs/guides/config-files.md
+++ b/documentation/docs/guides/config-files.md
@@ -29,6 +29,7 @@ The following settings can be configured at the root level of your config.yaml f
 | `GOOSE_PROVIDER` | Primary [LLM provider](/docs/getting-started/providers) | "anthropic", "openai", etc. | None | Yes |
 | `GOOSE_MODEL` | Default model to use | Model name (e.g., "claude-3.5-sonnet", "gpt-4") | None | Yes |
 | `GOOSE_TEMPERATURE` | Model response randomness | Float between 0.0 and 1.0 | Model-specific | No |
+| `GOOSE_MAX_TOKENS` | Maximum number of tokens for each model response (truncates longer responses) | Positive integer | Model-specific | No |
 | `GOOSE_MODE` | [Tool execution behavior](/docs/guides/goose-permissions) | "auto", "approve", "chat", "smart_approve" | "auto" | No |
 | `GOOSE_MAX_TURNS` | [Maximum number of turns](/docs/guides/sessions/smart-context-management#maximum-turns) allowed without user input | Integer (e.g., 10, 50, 100) | 1000 | No |
 | `GOOSE_LEAD_PROVIDER` | Provider for lead model in [lead/worker mode](/docs/guides/environment-variables#leadworker-model-configuration) | Same as `GOOSE_PROVIDER` options | Falls back to `GOOSE_PROVIDER` | No |
@@ -43,8 +44,8 @@ The following settings can be configured at the root level of your config.yaml f
 | `GOOSE_ALLOWLIST` | URL for allowed extensions | Valid URL | None | No |
 | `GOOSE_RECIPE_GITHUB_REPO` | GitHub repository for recipes | Format: "org/repo" | None | No |
 | `GOOSE_AUTO_COMPACT_THRESHOLD` | Set the percentage threshold at which goose [automatically summarizes your session](/docs/guides/sessions/smart-context-management#automatic-compaction). | Float between 0.0 and 1.0 (disabled at 0.0) | 0.8 | No |
-| `otel_exporter_otlp_endpoint` | OTLP endpoint URL for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | URL (e.g., `http://localhost:4318`) | None | No |
-| `otel_exporter_otlp_timeout` | Export timeout in milliseconds for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | Integer (ms) | 10000 | No |
+| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint URL for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | URL (e.g., `http://localhost:4318`) | None | No |
+| `OTEL_EXPORTER_OTLP_TIMEOUT` | Export timeout in milliseconds for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | Integer (ms) | 10000 | No |
 | `SECURITY_PROMPT_ENABLED` | Enable [prompt injection detection](/docs/guides/security/prompt-injection-detection) to identify potentially harmful commands | true/false | false | No |
 | `SECURITY_PROMPT_THRESHOLD` | Sensitivity threshold for [prompt injection detection](/docs/guides/security/prompt-injection-detection) (higher = stricter) | Float between 0.01 and 1.0 | 0.7 | No |
 
@@ -89,8 +90,8 @@ GOOSE_SEARCH_PATHS:
 - "/opt/homebrew/bin"
 
 # Observability (OpenTelemetry)
-otel_exporter_otlp_endpoint: "http://localhost:4318"
-otel_exporter_otlp_timeout: 20000
+OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4318"
+OTEL_EXPORTER_OTLP_TIMEOUT: 20000
 
 # Security Configuration
 SECURITY_PROMPT_ENABLED: true
diff --git a/documentation/docs/guides/environment-variables.md b/documentation/docs/guides/environment-variables.md
index 927aca4ec9fd..f54a1557a13c 100644
--- a/documentation/docs/guides/environment-variables.md
+++ b/documentation/docs/guides/environment-variables.md
@@ -19,14 +19,21 @@ These are the minimum required variables to get started with goose.
 | `GOOSE_PROVIDER` | Specifies the LLM provider to use | [See available providers](/docs/getting-started/providers#available-providers) | None (must be [configured](/docs/getting-started/providers#configure-provider-and-model)) |
 | `GOOSE_MODEL` | Specifies which model to use from the provider | Model name (e.g., "gpt-4", "claude-sonnet-4-20250514") | None (must be [configured](/docs/getting-started/providers#configure-provider-and-model)) |
 | `GOOSE_TEMPERATURE` | Sets the [temperature](https://medium.com/@kelseyywang/a-comprehensive-guide-to-llm-temperature-%EF%B8%8F-363a40bbc91f) for model responses | Float between 0.0 and 1.0 | Model-specific default |
+| `GOOSE_MAX_TOKENS` | Sets the maximum number of tokens for each model response (truncates longer responses) | Positive integer (e.g., 4096, 8192) | Model-specific default |
 
 **Examples**
 
 ```bash
 # Basic model configuration
 export GOOSE_PROVIDER="anthropic"
-export GOOSE_MODEL="claude-sonnet-4-20250514"
+export GOOSE_MODEL="claude-sonnet-4-5-20250929"
 export GOOSE_TEMPERATURE=0.7
+
+# Set a lower limit for shorter interactions
+export GOOSE_MAX_TOKENS=4096
+
+# Set a higher limit for tasks requiring longer output (e.g. code generation)
+export GOOSE_MAX_TOKENS=16000
 ```
 
 ### Advanced Provider Configuration
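The two files above document the same settings in YAML and shell form. For reviewers, here is a minimal `config.yaml` sketch combining the keys this diff touches; every key comes from the tables above, and the values are illustrative examples, not recommendations:

```yaml
# Illustrative config.yaml sketch using the settings documented in this diff.
# Values are example choices taken from the docs above, not defaults.
GOOSE_PROVIDER: "anthropic"
GOOSE_MODEL: "claude-sonnet-4-5-20250929"
GOOSE_TEMPERATURE: 0.7
GOOSE_MAX_TOKENS: 8192    # responses longer than this are truncated
OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4318"
OTEL_EXPORTER_OTLP_TIMEOUT: 20000
SECURITY_PROMPT_ENABLED: true
SECURITY_PROMPT_THRESHOLD: 0.7
```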