
Everything is streaming #7247

Merged
DOsinga merged 27 commits into main from everything-is-streaming on Feb 17, 2026

Conversation

DOsinga (Collaborator) commented Feb 16, 2026

Summary

Make stream() the required method to implement, and provide a default complete() implementation built on it. Both now require a model config (/cc @katzdave).
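
In short, the new trait shape looks roughly like this (condensed from the base.rs diff reviewed below; a sketch, not the full trait):

#[async_trait]
pub trait Provider: Send + Sync {
    /// Required: every provider implements the streaming path.
    async fn stream(
        &self,
        model_config: &ModelConfig,
        session_id: &str,
        system: &str,
        messages: &[Message],
        tools: &[Tool],
    ) -> Result<MessageStream, ProviderError>;

    /// Default: collect the stream into a single message plus usage.
    async fn complete(
        &self,
        model_config: &ModelConfig,
        session_id: &str,
        system: &str,
        messages: &[Message],
        tools: &[Tool],
    ) -> Result<(Message, ProviderUsage), ProviderError> {
        let stream = self
            .stream(model_config, session_id, system, messages, tools)
            .await?;
        collect_stream(stream).await
    }
}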

Douwe Osinga added 3 commits February 11, 2026 18:07
- Remove local generate_simple_session_description methods (moved to cli_common)
- Update stream() signatures to include model_config parameter
- Use cli_common helpers for session description requests
- Remove duplicate non-streaming stream() method in claude_code.rs
This commit completes the streaming consolidation refactoring by removing
the supports_streaming() method and conditional logic throughout the codebase.

Key changes:
- Removed supports_streaming() check in reply_parts.rs - always call stream() now
- Updated GitHub Copilot's stream() to internally handle both streaming and
  non-streaming models (checks GITHUB_COPILOT_STREAM_MODELS list)
- Removed supports_streaming() method from Provider trait and all implementations
- Fixed all test MockProviders to implement stream() instead of complete_with_model()
- Fixed test call sites to use new complete() signature with model_config parameter

All providers now implement only stream() as the primary method. Non-streaming
providers (like GitHub Copilot for certain models) wrap results with
stream_from_single_message() internally.
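
For illustration, a non-streaming provider's stream() ends up shaped roughly like this (a sketch: complete_non_streaming is a hypothetical stand-in for whatever one-shot request method the provider actually has):

async fn stream(
    &self,
    model_config: &ModelConfig,
    session_id: &str,
    system: &str,
    messages: &[Message],
    tools: &[Tool],
) -> Result<MessageStream, ProviderError> {
    // One blocking request via the provider's own one-shot path, then wrap
    // the single (Message, ProviderUsage) result as a stream.
    let (message, usage) = self
        .complete_non_streaming(model_config, session_id, system, messages, tools)
        .await?;
    Ok(super::base::stream_from_single_message(message, usage))
}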

All 666 tests pass.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
codefromthecrypt (Collaborator) left a comment

sounds great

Copilot AI review requested due to automatic review settings February 16, 2026 13:16
Copilot AI (Contributor) left a comment

Pull request overview

This PR refactors the Provider interface so streaming is the primary, required execution path, with complete() becoming a default helper that collects a stream, and updates provider implementations and call sites to pass an explicit ModelConfig.

Changes:

  • Make Provider::stream(&ModelConfig, ...) -> MessageStream the required provider entrypoint and implement complete() via stream collection.
  • Update all provider implementations to the new trait signature (wrapping non-streaming providers via stream_from_single_message).
  • Update key agent/CLI code paths to pass a ModelConfig explicitly when completing.

Reviewed changes

Copilot reviewed 34 out of 34 changed files in this pull request and generated 5 comments.

Summary per file:

crates/goose/src/providers/base.rs: Makes stream() mandatory, updates complete()/complete_fast(), and adds collect_stream() helper.
crates/goose/src/providers/anthropic.rs: Removes non-streaming completion path; stream now takes &ModelConfig.
crates/goose/src/providers/bedrock.rs: Converts provider to return MessageStream and wraps single-message responses.
crates/goose/src/providers/chatgpt_codex.rs: Removes complete_with_model; stream now uses passed model_config.
crates/goose/src/providers/claude_code.rs: Removes complete_with_model; stream now uses passed model_config.
crates/goose/src/providers/codex.rs: Converts to stream() returning MessageStream (single-message wrapper for non-stream CLI).
crates/goose/src/providers/cursor_agent.rs: Converts to stream() returning MessageStream (single-message wrapper for non-stream CLI).
crates/goose/src/providers/databricks.rs: Removes non-streaming completion; stream uses OpenAI-compat streaming.
crates/goose/src/providers/gcpvertexai.rs: Removes non-streaming completion; stream signature updated for &ModelConfig.
crates/goose/src/providers/gemini_cli.rs: Converts to stream() returning MessageStream (single-message wrapper for non-stream CLI).
crates/goose/src/providers/githubcopilot.rs: Moves streaming capability check into stream() and wraps non-stream models.
crates/goose/src/providers/google.rs: Removes non-streaming completion; stream now takes &ModelConfig.
crates/goose/src/providers/lead_worker.rs: Refactors wrapper provider to implement stream() under the new trait.
crates/goose/src/providers/litellm.rs: Converts to stream() returning MessageStream (single-message wrapper).
crates/goose/src/providers/ollama.rs: Removes non-streaming completion; stream now takes &ModelConfig; updates session naming call site.
crates/goose/src/providers/openai.rs: Removes non-streaming completion; stream now takes &ModelConfig for both responses + chat APIs.
crates/goose/src/providers/openai_compatible.rs: Removes non-streaming completion; stream now takes &ModelConfig.
crates/goose/src/providers/openrouter.rs: Removes non-streaming completion; stream now takes &ModelConfig.
crates/goose/src/providers/provider_test.rs: Updates provider configuration test to pass ModelConfig into complete().
crates/goose/src/providers/sagemaker_tgi.rs: Converts to stream() returning MessageStream (single-message wrapper).
crates/goose/src/providers/snowflake.rs: Converts to stream() returning MessageStream (single-message wrapper).
crates/goose/src/providers/tetrate.rs: Removes non-streaming completion; stream now takes &ModelConfig.
crates/goose/src/providers/testprovider.rs: Updates test provider to record/replay via stream collection and single-message streams.
crates/goose/src/providers/venice.rs: Converts to stream() returning MessageStream (single-message wrapper).
crates/goose/src/agents/agent.rs: Updates recipe-generation completion call to pass a captured ModelConfig.
crates/goose/src/agents/mcp_client.rs: Updates MCP sampling handler to call the new complete(&ModelConfig, ...).
crates/goose/src/agents/reply_parts.rs: Removes supports_streaming branching and always uses provider streaming path.
crates/goose/src/agents/platform_extensions/apps.rs: Updates apps content generation to pass ModelConfig into complete().
crates/goose/src/context_mgmt/mod.rs: Updates internal provider test mock to implement stream() returning MessageStream.
crates/goose/src/permission/permission_judge.rs: Updates permission judge to pass ModelConfig into complete().
crates/goose/examples/databricks_oauth.rs: Updates example to use the new provider completion API.
crates/goose/examples/image_tool.rs: Updates example to use the new provider completion API.
crates/goose-cli/src/session/mod.rs: Updates planner classification + reasoning path to pass ModelConfig into complete().
crates/goose-cli/src/commands/configure.rs: Updates OpenRouter auth test to use the new complete(&ModelConfig, ...) signature.
Comments suppressed due to low confidence (4)

crates/goose/examples/image_tool.rs:71

  • This example still calls .complete(...) with the old argument list; Provider::complete now requires a leading &ModelConfig, so this won't compile. Fetch the config with let model_config = provider.get_model_config() and pass &model_config before the session id/system/messages/tools.
        let (response, usage) = provider
            .complete(
                "",
                "You are a helpful assistant. Please describe any text you see in the image.",
                &messages,
                &[Tool::new("view_image", "View an image", input_schema)],
            )

crates/goose/examples/databricks_oauth.rs:24

  • This example still uses the pre-change .complete(session_id, ...) signature; Provider::complete now takes (&ModelConfig, session_id, ...), so update it to pass a model config from the provider (or a chosen config) before the session id.
    let (response, usage) = provider
        .complete(
            "",
            "You are a helpful assistant.",
            &[message],
            &[],
        )

crates/goose/src/providers/gcpvertexai.rs:631

  • The model_config parameter is immediately shadowed by let model_config = self.get_model_config();, so the passed config is ignored and the parameter becomes unused (will fail under -D warnings); use the provided model_config (or rename the local) instead of always pulling from self.
    async fn stream(
        &self,
        model_config: &ModelConfig,
        session_id: &str,
        system: &str,
        messages: &[Message],
        tools: &[Tool],
    ) -> Result<MessageStream, ProviderError> {
        let model_config = self.get_model_config();
        let (mut request, context) = create_request(&model_config, system, messages, tools)?;

crates/goose/src/providers/databricks.rs:291

  • create_request(&model_config, ...) is passing a &&ModelConfig (since model_config is already &ModelConfig), which will trigger Clippy’s needless_borrow under -D warnings; pass model_config directly.

        let mut payload =
            create_request(&model_config, system, messages, tools, &self.image_format)?;
        payload
            .as_object_mut()

Comment on lines 390 to 401

     /// Complete with a specific model config.
     async fn complete(
         &self,
         model_config: &ModelConfig,
         session_id: &str,
         system: &str,
         messages: &[Message],
         tools: &[Tool],
     ) -> Result<(Message, ProviderUsage), ProviderError> {
-        let model_config = self.get_model_config();
-        self.complete_with_model(Some(session_id), &model_config, system, messages, tools)
-            .await
+        let stream = self.stream(model_config, session_id, system, messages, tools).await?;
+        collect_stream(stream).await
     }
Copilot AI commented Feb 16, 2026

This API change makes Provider::complete require an explicit model_config, but there are still in-repo call sites using the old .complete(session_id, ...) signature (e.g. in crates/goose/tests/providers.rs), which will break compilation; update those callers to pass &provider.get_model_config() (or another chosen config).

Copilot AI commented Feb 16, 2026

session_opt is computed but never used, which will trip -D warnings in CI; remove it or use it consistently when calling the inner provider.

Suggested change
-        let session_opt = if session_id.is_empty() { None } else { Some(session_id) };

Comment on lines +238 to 245

+    let model_config = provider.get_model_config();
     let (response, usage) = provider
-        .complete_with_model(
-            session_id.as_deref(),
-            &provider.get_model_config(),
+        .complete(
+            &model_config,
+            session_id.as_deref().unwrap_or(""),
             system_prompt,
             &provider_ready_messages,
             &[],
Copilot AI commented Feb 16, 2026

Switching to complete() (which now collects from streaming) can yield a Message made of multiple partial Text chunks; the later code in this function still only returns response.content.first(), which will truncate output—build the MCP reply from the full accumulated text instead.
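
A sketch of that fix, assuming MessageContent::Text exposes its text as a String field (the exact accessor may differ):

// Concatenate every text part instead of taking only response.content.first().
let full_text: String = response
    .content
    .iter()
    .filter_map(|part| match part {
        MessageContent::Text(t) => Some(t.text.as_str()),
        _ => None,
    })
    .collect();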

Comment on lines 355 to 357
Copilot AI commented Feb 16, 2026

The let message = message; / let usage = provider_usage; shadowing is a no-op and will trigger Clippy’s shadow_same (CI runs clippy with -D warnings); remove these bindings and pass the existing variables directly.

Suggested change
-        let message = message;
-        let usage = provider_usage;
-        Ok(super::base::stream_from_single_message(message, usage))
+        Ok(super::base::stream_from_single_message(message, provider_usage))

Copilot AI review requested due to automatic review settings February 16, 2026 17:16
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 34 out of 34 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (2)

crates/goose/examples/image_tool.rs:72

  • The parameters to complete() are in the wrong order. The first parameter should be model_config: &ModelConfig, but an empty string is being passed. It should be:
let model_config = provider.get_model_config();
provider.complete(
    &model_config,
    "",
    "You are a helpful assistant. Please describe any text you see in the image.",
    &messages,
    &[Tool::new("view_image", "View an image", input_schema)],
)
        let (response, usage) = provider
            .complete(
                "",
                "You are a helpful assistant. Please describe any text you see in the image.",
                &messages,
                &[Tool::new("view_image", "View an image", input_schema)],
            )
            .await?;

crates/goose/src/providers/gcpvertexai.rs:584

  • The model_config parameter is ignored (marked with underscore prefix), and the method uses self.get_model_config() instead. This defeats the purpose of passing model_config as a parameter, which is to allow callers to override the provider's default model configuration. The parameter should be used:
async fn stream(
    &self,
    model_config: &ModelConfig,  // Remove underscore
    session_id: &str,
    system: &str,
    messages: &[Message],
    tools: &[Tool],
) -> Result<MessageStream, ProviderError> {
    // Use the passed model_config instead of self.get_model_config()
    let (mut request, context) = create_request(model_config, system, messages, tools)?;
    // ...
    let mut log = RequestLog::start(model_config, &request)?;
    // ...
}
    async fn stream(
        &self,
        _model_config: &ModelConfig,
        session_id: &str,
        system: &str,
        messages: &[Message],
        tools: &[Tool],
    ) -> Result<MessageStream, ProviderError> {
        let model_config = self.get_model_config();
        let (mut request, context) = create_request(&model_config, system, messages, tools)?;

        if matches!(context.provider(), ModelProvider::Anthropic) {
            if let Some(obj) = request.as_object_mut() {
                obj.insert("stream".to_string(), Value::Bool(true));
            }
        }

        let mut log = RequestLog::start(&model_config, &request)?;

Comment on lines 414 to 418
Copilot AI commented Feb 16, 2026

The variable _session_opt is created but never used. This appears to be leftover code from the refactoring where Option<&str> was changed to &str for session_id. Since the stream() method now takes &str directly, this conversion is unnecessary and should be removed.

Suggested change
-        let _session_opt = if session_id.is_empty() {
-            None
-        } else {
-            Some(session_id)
-        };

Comment on lines 361 to 363
Copilot AI commented Feb 16, 2026

Lines 361-362 contain unnecessary variable rebindings that serve no purpose. The variables message and usage are shadowed with themselves, which adds no value. Removing them leaves:

    let provider_usage = ProviderUsage::new(model_name.to_string(), usage);
    Ok(super::base::stream_from_single_message(message, provider_usage))

Suggested change
-        let message = message;
-        let usage = provider_usage;
-        Ok(super::base::stream_from_single_message(message, usage))
+        Ok(super::base::stream_from_single_message(message, provider_usage))

Comment on lines 18 to 20
     let (response, usage) = provider
-        .complete_with_model(
-            None,
-            &provider.get_model_config(),
-            "You are a helpful assistant.",
-            &[message],
-            &[],
-        )
+        .complete("", "You are a helpful assistant.", &[message], &[])
         .await?;
Copilot AI commented Feb 16, 2026

The parameters to complete() are in the wrong order. According to the trait definition in base.rs, the signature is:

async fn complete(
    &self,
    model_config: &ModelConfig,
    session_id: &str,
    system: &str,
    messages: &[Message],
    tools: &[Tool],
)

But this code passes an empty string as the first parameter where model_config should be. It should be:

let model_config = provider.get_model_config();
provider.complete(
    &model_config,
    "",
    "You are a helpful assistant.",
    &[message],
    &[],
)

Copilot AI review requested due to automatic review settings February 16, 2026 20:09
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 34 out of 34 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (1)

crates/goose/src/providers/openrouter.rs:306

  • Previously OpenRouter requests added the user field derived from session_id (via create_request_based_on_model); the new stream path no longer injects it, so the session/user identifier will no longer be sent in the request body—if OpenRouter relies on this for attribution/rate-limiting, re-add it when session_id is non-empty.
        let mut payload = create_request(
            model_config,
            system,
            messages,
            tools,
            &ImageFormat::OpenAi,
            true,
        )?;

        if self.supports_cache_control().await {
            payload = update_request_for_anthropic(&payload);
        }

        if is_gemini_model(&model_config.model_name) {
            openrouter_format::add_reasoning_details_to_request(&mut payload, messages);
        }

        if let Some(obj) = payload.as_object_mut() {
            obj.insert("transforms".to_string(), json!(["middle-out"]));
        }

        let mut log = RequestLog::start(model_config, &payload)?;

        let response = self
            .with_retry(|| async {
                let resp = self
                    .api_client
                    .response_post(Some(session_id), "api/v1/chat/completions", &payload)
                    .await?;
                handle_status_openai_compat(resp).await

Comment on lines 650 to 658

    if let Some(msg) = msg_opt {
        final_message = Some(match final_message {
            Some(mut prev) => {
                // Merge messages by appending content
                prev.content.extend(msg.content);
                prev
            }
            None => msg,
        });
Copilot AI commented Feb 16, 2026

collect_stream merges message chunks by extending prev.content, which will turn streamed text deltas into many MessageContent::Text entries; to preserve the previous complete semantics (typically a single combined text block), consider coalescing adjacent text/reasoning blocks while collecting.
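
One possible shape for that coalescing (a sketch, again assuming the Text variant carries a text: String field):

// Merge an incoming text chunk into the previous entry instead of pushing a new one.
fn push_coalesced(content: &mut Vec<MessageContent>, incoming: MessageContent) {
    if let MessageContent::Text(next) = &incoming {
        if let Some(MessageContent::Text(prev)) = content.last_mut() {
            prev.text.push_str(&next.text);
            return;
        }
    }
    content.push(incoming);
}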

Comment on lines 374 to 403
 /// Base trait for AI providers (OpenAI, Anthropic, etc)
 #[async_trait]
 pub trait Provider: Send + Sync {
     /// Get the name of this provider instance
     fn get_name(&self) -> &str;

-    // Internal implementation of complete, used by complete_fast and complete
-    // Providers should override this to implement their actual completion logic
-    //
-    /// # Parameters
-    /// - `session_id`: Use `None` only for configuration or pre-session tasks.
-    async fn complete_with_model(
+    /// Primary streaming method that all providers must implement.
+    async fn stream(
         &self,
-        session_id: Option<&str>,
+        model_config: &ModelConfig,
+        session_id: &str,
         system: &str,
         messages: &[Message],
         tools: &[Tool],
-    ) -> Result<(Message, ProviderUsage), ProviderError>;
+    ) -> Result<MessageStream, ProviderError>;

-    // Default implementation: use the provider's configured model
+    /// Complete with a specific model config.
     async fn complete(
         &self,
+        model_config: &ModelConfig,
         session_id: &str,
         system: &str,
         messages: &[Message],
         tools: &[Tool],
     ) -> Result<(Message, ProviderUsage), ProviderError> {
-        let model_config = self.get_model_config();
-        self.complete_with_model(Some(session_id), &model_config, system, messages, tools)
-            .await
+        let stream = self
+            .stream(model_config, session_id, system, messages, tools)
+            .await?;
+        collect_stream(stream).await
     }
Copilot AI commented Feb 16, 2026

The Provider trait now requires stream(model_config, session_id, ...) and changed the complete signature, but there are still implementations/callers in the repo that use the old complete(session_id, ...) / complete_with_model shape (e.g., integration tests under crates/goose/tests); these will not compile until they’re updated to implement stream and pass an explicit ModelConfig into complete.

Comment on lines 666 to 668
Copilot AI commented Feb 16, 2026

collect_stream currently errors unless the stream yields a ProviderUsage; some streaming formats (e.g., Google streaming only sets final_usage when token counts are present) can legitimately yield a full message but no usage, which will make Provider::complete fail—consider defaulting usage (and model) when missing, or requiring streams to always emit a usage value at least once.

Suggested change
-        match (final_message, final_usage) {
-            (Some(msg), Some(usage)) => Ok((msg, usage)),
-            _ => Err(ProviderError::ExecutionError(
+        match final_message {
+            Some(msg) => {
+                // Some providers may not emit usage for certain streams; default when missing.
+                let usage = final_usage.unwrap_or_default();
+                Ok((msg, usage))
+            }
+            None => Err(ProviderError::ExecutionError(

Copilot AI review requested due to automatic review settings February 16, 2026 21:47
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 34 out of 34 changed files in this pull request and generated 4 comments.

    }

    // Next turn uses worker (will fail, but should retry with lead and succeed)
    let model_config = provider.get_model_config();
Copilot AI commented Feb 16, 2026

Duplicate model_config retrieval on consecutive lines. Line 618 already retrieves the model_config, so this second retrieval is unnecessary.

Suggested change
-        let model_config = provider.get_model_config();

    assert!(!provider.is_in_fallback_mode().await); // Not in fallback mode

    // Another turn - should still try worker first, then retry with lead
    let model_config = provider.get_model_config();
Copilot AI commented Feb 16, 2026

Duplicate model_config retrieval on consecutive lines. Line 618 already retrieves the model_config that can be reused.

    assert!(provider.is_in_fallback_mode().await);

    // One more fallback turn
    let model_config = provider.get_model_config();
Copilot AI commented Feb 16, 2026

Duplicate model_config retrieval on consecutive lines. Line 681 already retrieves the model_config that can be reused.

Suggested change
-        let model_config = provider.get_model_config();

     async fn stream(
         &self,
-        session_id: Option<&str>,
+        _model_config: &ModelConfig,
Copilot AI commented Feb 16, 2026

The _model_config parameter is ignored in favor of getting the model config from the active provider. Consider renaming to _user_model_config or adding a comment explaining why it's ignored, as this could be confusing for callers who expect their model_config to be used.

lifeizhou-ap (Collaborator) commented

The code is much simpler now with the change!

We might have to keep the non-streaming version for Anthropic, OpenAI, and Ollama, as they support from_custom_config, which has a supports_streaming configuration.

Another option could be removing supports_streaming from the custom config, since most models now support streaming. However, users would lose the flexibility to use a model that does not support streaming.

Copilot AI review requested due to automatic review settings February 17, 2026 14:30
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 39 out of 39 changed files in this pull request and generated 8 comments.

Comment on lines 1 to 512
Copilot AI commented Feb 17, 2026

This documentation file describes a working directory cleanup that appears to be unrelated to the streaming migration described in the PR. The file analyzes changes from commits 9a01fcb and aa356bd about working directory handling via MCP metadata vs environment variables. This seems like leftover content from a different refactoring effort that should not be part of this "Everything is streaming" PR.

Comment on lines 1 to 225
Copilot AI commented Feb 17, 2026

This implementation summary describes working directory refactoring that is unrelated to the streaming migration described in the PR. The file discusses removal of GOOSE_WORKING_DIR environment variable and memory extension changes, which don't align with the PR's stated purpose of making stream() the required method. This appears to be documentation from a separate refactoring that should not be included in this PR.

Comment on lines 1 to 336
Copilot AI commented Feb 17, 2026

This final implementation summary also describes working directory refactoring (removing GOOSE_WORKING_DIR, implementing per-session memory isolation) that is unrelated to the streaming migration purpose of this PR. This appears to be documentation from a separate refactoring effort that should not be included in this "Everything is streaming" PR.

Comment on lines 1 to 39
Copilot AI commented Feb 17, 2026

This shell script is for finishing the streaming migration, but the script description indicates it's meant to help remove complete_with_model() from remaining providers. However, based on my review, all complete_with_model() methods have already been removed from all providers in this PR. This script is now obsolete and should either be removed or updated to reflect that the migration is complete.

Comment on lines 1 to 17
 use super::api_client::{ApiClient, AuthMethod};
-use super::base::{
-    ConfigKey, MessageStream, Provider, ProviderDef, ProviderMetadata, ProviderUsage, Usage,
-};
+use super::base::{ConfigKey, MessageStream, Provider, ProviderDef, ProviderMetadata};
 use super::errors::ProviderError;
-use super::openai_compatible::{
-    handle_response_openai_compat, handle_status_openai_compat, stream_openai_compat,
-};
+use super::openai_compatible::{handle_status_openai_compat, stream_openai_compat};
 use super::retry::ProviderRetry;
-use super::utils::{get_model, handle_response_google_compat, is_google_model, RequestLog};
+use super::utils::RequestLog;
 use crate::config::signup_tetrate::TETRATE_DEFAULT_MODEL;
 use crate::conversation::message::Message;
 use anyhow::Result;
 use async_trait::async_trait;
 use futures::future::BoxFuture;
 use serde_json::Value;

 use crate::model::ModelConfig;
-use crate::providers::formats::openai::{create_request, get_usage, response_to_message};
+use crate::providers::formats::openai::create_request;
 use rmcp::model::Tool;

 const TETRATE_PROVIDER_NAME: &str = "tetrate";
Copilot AI commented Feb 17, 2026

The supports_streaming field is declared and initialized but never used. This field should be removed from the TetrateProvider struct and from initialization logic since the Provider trait no longer has a supports_streaming() method and the field is not used within the provider's implementation.

Updated test calls to use the new signature:
- complete() now takes model_config as first parameter
- Changed complete_with_model() to complete() (method removed)
- All tests now properly pass model_config parameter

All 678 tests passing.
katzdave (Collaborator) left a comment

Nice, love it. Some readmes to delete.

Copilot AI review requested due to automatic review settings February 17, 2026 16:40
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 40 out of 40 changed files in this pull request and generated 4 comments.

payload["stream"] = serde_json::Value::Bool(true);
if Self::should_use_responses_api(&model_config.model_name, &self.base_path) {
let mut payload = create_responses_request(model_config, system, messages, tools)?;
payload["stream"] = serde_json::Value::Bool(self.supports_streaming);
Copilot AI commented Feb 17, 2026

The supports_streaming field is being used but has been removed from provider structs as part of this migration. This should use a boolean literal true for the responses API streaming path, or the stream parameter should be set based on whether the model actually supports streaming.

             tools,
             &ImageFormat::OpenAi,
-            true,
+            self.supports_streaming,
Copilot AI commented Feb 17, 2026

The supports_streaming field is being used but has been removed from provider structs. This should be replaced with true since streaming is now the default path for all providers.

             api_client,
             model,
-            supports_streaming: config.supports_streaming.unwrap_or(true),
+            supports_streaming,
Copilot AI commented Feb 17, 2026

The supports_streaming field is being assigned but according to the PR description and migration status, this field should be removed from the provider struct. The validation for non-streaming mode (lines 109-114) should also be removed.

             api_client,
             model,
-            supports_streaming: config.supports_streaming.unwrap_or(true),
+            supports_streaming,
Copilot AI commented Feb 17, 2026

The supports_streaming field is being assigned but should be removed from the provider struct according to the streaming migration. The validation that rejects non-streaming mode (lines 133-138) should also be removed.

Douwe Osinga and others added 5 commits February 17, 2026 17:52
These files were temporary documentation during the streaming migration.
Removed from git tracking to keep them local and untracked.
This is a personal direnv configuration file that should not be committed.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Resolved conflict in databricks.rs by removing re-added complete_with_model() method
to maintain streaming-only architecture.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Updated all mock providers in tests to implement stream() instead of
complete_with_model():
- agent.rs: MockToolProvider
- compaction.rs: MockCompactionProvider
- mcp_integration_test.rs: MockProvider
- session_id_propagation_test.rs: make_request() call
- tetrate_streaming.rs: all stream() calls (5 locations)
- goose-acp/src/server.rs: MockModelProvider

All tests now use the new Provider trait signature with model_config as
first parameter to stream().

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 17, 2026 17:36
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 42 out of 42 changed files in this pull request and generated no new comments.

Comments suppressed due to low confidence (1)

.envrc:1

  • The deletion of .envrc appears unrelated to this PR's purpose ("everything is streaming"). This file is typically used for directory-specific environment variable management with direnv. Consider whether this deletion was intentional or should be in a separate commit.

Updated mock server to return SSE streaming format instead of JSON.
The OpenAI provider defaults to streaming mode and expects:
  data: {"choices":[{"delta":{"content":"..."}}]}
  data: [DONE]

This fixes "Stream yielded no message" errors.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
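
For reference, a mock body in the shape described above might look like this (hypothetical content; real chunks carry more fields):

// Minimal SSE body a mock server could return for the streaming OpenAI path.
const MOCK_SSE_BODY: &str = "data: {\"choices\":[{\"delta\":{\"content\":\"Hello\"}}]}\n\n\
data: {\"choices\":[{\"delta\":{\"content\":\" world\"}}]}\n\n\
data: [DONE]\n\n";
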
DOsinga added this pull request to the merge queue Feb 17, 2026
Merged via the queue into main with commit 9a91fcc Feb 17, 2026
20 checks passed
DOsinga deleted the everything-is-streaming branch February 17, 2026 18:58
zanesq added a commit that referenced this pull request Feb 17, 2026
…ions-fallback

* 'main' of github.com:block/goose:
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)
  Everything is streaming (#7247)
  openai: responses models and hardens event streaming handling (#6831)
  docs: disable ai session naming (#7194)
rabi added a commit to rabi/goose that referenced this pull request Feb 18, 2026
Looks like block#7247 replaced the real streaming implementation with
execute_command + stream_from_single_message, which collects all
CLI output before emitting a single message. Restore the try_stream!
based implementation.

Change-Id: Iaf14c892326cdff2ec212665e475476323163221
Signed-off-by: rabi <ramishra@redhat.com>
jh-block added a commit that referenced this pull request Feb 18, 2026
* origin/main: (49 commits)
  chore: show important keys for provider configuration (#7265)
  fix: subrecipe relative path with summon (#7295)
  fix extension selector not displaying the correct enabled extensions (#7290)
  Use the working dir from the session (#7285)
  Fix: Minor logging uplift for debugging of prompt injection mitigation (#7195)
  feat(otel): make otel logging level configurable (#7271)
  docs: add documentation for Top Of Mind extension (#7283)
  Document gemini 3 thinking levels (#7282)
  docs: stream subagent tool calls (#7280)
  Docs: delete custom provider in desktop (#7279)
  Everything is streaming (#7247)
  openai: responses models and hardens event streaming handling (#6831)
  docs: disable ai session naming (#7194)
  Added cmd to validate bundled extensions json (#7217)
  working_dir usage more clear in add_extension (#6958)
  Use Canonical Models to set context window sizes (#6723)
  Set up direnv and update flake inputs (#6526)
  fix: restore subagent tool call notifications after summon refactor (#7243)
  fix(ui): preserve server config values on partial provider config save (#7248)
  fix(claude-code): allow goose to run inside a Claude Code session (#7232)
  ...
github-merge-queue bot pushed a commit that referenced this pull request Feb 18, 2026
Signed-off-by: rabi <ramishra@redhat.com>
jh-block added a commit that referenced this pull request Feb 18, 2026
* origin/main:
  docs: remove ALPHA_FEATURES flag from documentation (#7315)
  docs: escape variable syntax in recipes (#7314)
  docs: update OTel environment variable and config guides (#7221)
  docs: system proxy settings (#7311)
  docs: add Summon extension tutorial and update Skills references (#7310)
  docs: agent session id (#7289)
  fix(gemini-cli): restore streaming lost in #7247 (#7291)
  Update more instructions (#7305)
  feat: add Moonshot and Kimi Code declarative providers (#7304)
  fix(cli): handle Reasoning content and fix streaming thinking display (#7296)
  feat: add GOOSE_SUBAGENT_MODEL and GOOSE_SUBAGENT_PROVIDER config options (#7277)
  fix(openai): support "reasoning" field alias in streaming deltas (#7294)
  fix(ui): revert app-driven iframe width and send containerDimensions per ext-apps spec (#7300)
  New OpenAI event (#7301)
  ci: add fork guards to scheduled workflows (#7292)
michaelneale added a commit that referenced this pull request Feb 19, 2026
* main: (54 commits)
  docs: add monitoring subagent activity section (#7323)
  docs: document Desktop UI recipe editing for model/provider and extensions (#7327)
  docs: add CLAUDE_THINKING_BUDGET and CLAUDE_THINKING_ENABLED environm… (#7330)
  fix: display 'Code Mode' instead of 'code_execution' in CLI (#7321)
  docs: add Permission Policy documentation for MCP Apps (#7325)
  update RPI plan prompt (#7326)
  docs: add CLI syntax highlighting theme customization (#7324)
  fix(cli): replace shell-based update with native Rust implementation (#7148)
  docs: rename Code Execution extension to Code Mode extension (#7316)
  docs: remove ALPHA_FEATURES flag from documentation (#7315)
  docs: escape variable syntax in recipes (#7314)
  docs: update OTel environment variable and config guides (#7221)
  docs: system proxy settings (#7311)
  docs: add Summon extension tutorial and update Skills references (#7310)
  docs: agent session id (#7289)
  fix(gemini-cli): restore streaming lost in #7247 (#7291)
  Update more instructions (#7305)
  feat: add Moonshot and Kimi Code declarative providers (#7304)
  fix(cli): handle Reasoning content and fix streaming thinking display (#7296)
  feat: add GOOSE_SUBAGENT_MODEL and GOOSE_SUBAGENT_PROVIDER config options (#7277)
  ...