
Fix speech local #7181

Merged
DOsinga merged 8 commits into main from fix-speech-local on Feb 12, 2026
Conversation

DOsinga (Collaborator) commented Feb 12, 2026

Summary

Stop repeated hallucinations

Copilot AI review requested due to automatic review settings February 12, 2026 16:52
}
}

// Trigger on: 3+ repeats of anything, or 2 repeats of 5+ token patterns
Collaborator commented:
seems like a pretty good heuristic
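The heuristic in that comment can be sketched in isolation. This is a std-only illustration of the stated rule, not the PR's actual `detect_repetition_impl`, whose details may differ: scan suffix patterns of the token stream, count back-to-back copies, and report where to truncate so one copy survives.

```rust
/// Hypothetical sketch of the repetition heuristic: trigger on 3+
/// repeats of any pattern, or 2 repeats of patterns of 5+ tokens.
/// Returns the index to truncate at so exactly one copy remains.
fn detect_repetition(tokens: &[u32]) -> Option<usize> {
    let n = tokens.len();
    // Try every candidate pattern length up to half the sequence.
    for p in 1..=n / 2 {
        let pattern = &tokens[n - p..];
        // Count how many consecutive copies of `pattern` end the sequence.
        let mut repeats = 1;
        while repeats * p + p <= n
            && tokens[n - (repeats + 1) * p..n - repeats * p] == *pattern
        {
            repeats += 1;
        }
        // 3+ repeats of anything, or 2 repeats of 5+ token patterns.
        if repeats >= 3 || (repeats >= 2 && p >= 5) {
            return Some(n - (repeats - 1) * p);
        }
    }
    None
}
```

A two-fold repeat of a short pattern (e.g. a doubled word) deliberately does not trigger, which keeps the heuristic from mangling legitimate speech.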

Copilot AI (Contributor) left a comment:

Pull request overview

This PR aims to reduce repeated “hallucinated” dictation output for local Whisper transcription by limiting how much padded audio is processed and by truncating/cleaning repetitive outputs.

Changes:

  • Add extensive tracing around local Whisper model initialization, audio decoding, segmentation, and decoding.
  • Limit transcription to “actual” audio frames (vs. padded mel frames) and add token/text repetition mitigation (detect_repetition_impl, deduplicate_text).
  • Refactor dictation provider error handling to use map_err + logging in several places (replacing anyhow::Context).
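The text-level pass named above can be sketched as a greedy collapse of immediately repeated word sequences. This is an illustration only; the PR's actual `deduplicate_text` may be more conservative, since a fully greedy collapse also folds legitimate repeats such as "very very":

```rust
/// Hypothetical sketch of text-level deduplication: after each word,
/// check whether the output now ends in two identical back-to-back
/// phrases, and if so drop one copy.
fn deduplicate_text(text: &str) -> String {
    let words: Vec<&str> = text.split_whitespace().collect();
    let mut out: Vec<&str> = Vec::new();
    for &word in &words {
        out.push(word);
        let n = out.len();
        // Look for the shortest doubled suffix phrase and collapse it.
        for p in 1..=n / 2 {
            if out[n - p..] == out[n - 2 * p..n - p] {
                out.truncate(n - p);
                break;
            }
        }
    }
    out.join(" ")
}
```

On a typical Whisper hallucination loop like "thank you thank you thank you", this collapses the output to a single "thank you" while leaving non-repetitive text untouched.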

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 8 comments.

Reviewed files:

  • crates/goose/src/dictation/whisper.rs: Adjusts frame accounting to avoid padded audio, adds repetition detection/deduplication, and adds debug instrumentation.
  • crates/goose/src/dictation/providers.rs: Changes local/provider transcription error handling to log and map_err instead of using anyhow::Context.

Comment on lines 985 to 997
if !resampled.is_empty() {
let max_abs = resampled.iter().map(|s| s.abs()).fold(0.0f32, f32::max);
let mean_abs = resampled.iter().map(|s| s.abs()).sum::<f32>() / resampled.len() as f32;
let rms = (resampled.iter().map(|s| s * s).sum::<f32>() / resampled.len() as f32).sqrt();
tracing::debug!(
output_samples = resampled.len(),
max_abs,
mean_abs,
rms,
"audio decoding complete with PCM stats"
);
} else {
tracing::debug!(output_samples = 0, "audio decoding complete (empty)");
Copilot AI commented Feb 12, 2026:

PCM stats (max_abs/mean_abs/rms) are computed on every decode even when debug logs are disabled, which can be expensive for longer audio buffers; compute these only when tracing::enabled!(tracing::Level::DEBUG) is true.

Suggested change
if !resampled.is_empty() {
let max_abs = resampled.iter().map(|s| s.abs()).fold(0.0f32, f32::max);
let mean_abs = resampled.iter().map(|s| s.abs()).sum::<f32>() / resampled.len() as f32;
let rms = (resampled.iter().map(|s| s * s).sum::<f32>() / resampled.len() as f32).sqrt();
tracing::debug!(
output_samples = resampled.len(),
max_abs,
mean_abs,
rms,
"audio decoding complete with PCM stats"
);
} else {
tracing::debug!(output_samples = 0, "audio decoding complete (empty)");
if tracing::enabled!(tracing::Level::DEBUG) {
if !resampled.is_empty() {
let max_abs = resampled.iter().map(|s| s.abs()).fold(0.0f32, f32::max);
let mean_abs = resampled.iter().map(|s| s.abs()).sum::<f32>() / resampled.len() as f32;
let rms =
(resampled.iter().map(|s| s * s).sum::<f32>() / resampled.len() as f32).sqrt();
tracing::debug!(
output_samples = resampled.len(),
max_abs,
mean_abs,
rms,
"audio decoding complete with PCM stats"
);
} else {
tracing::debug!(output_samples = 0, "audio decoding complete (empty)");
}
}

Copilot uses AI. Check for mistakes.
Comment on lines +141 to +144
let text = transcriber.transcribe(&audio_bytes).map_err(|e| {
tracing::error!("Transcription failed: {}", e);
e
})?;
Copilot AI commented Feb 12, 2026:

Replacing anyhow::Context with map_err + logging drops useful error context for callers (and can lead to duplicate logging up the stack); prefer keeping .context("…") on these fallible calls and let the top-level handler decide if/where to log.
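The distinction matters because anyhow's `.context(...)` keeps the original error reachable through `std::error::Error::source()`, so a top-level handler can print the whole chain. A minimal std-only illustration of that wrap-and-preserve pattern (`Context` here is a hypothetical stand-in for what anyhow does, not anyhow's actual type):

```rust
use std::error::Error;
use std::fmt;

// Wraps an error with a message while keeping the original reachable
// via `source()` -- the property the review says map_err + anyhow!(...)
// loses when it rebuilds the error from scratch.
#[derive(Debug)]
struct Context {
    msg: &'static str,
    source: Box<dyn Error>,
}

impl fmt::Display for Context {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.msg)
    }
}

impl Error for Context {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.source.as_ref())
    }
}

/// Walk the source chain the way a top-level handler would render it.
fn chain(err: &dyn Error) -> String {
    let mut parts = vec![err.to_string()];
    let mut cur = err.source();
    while let Some(e) = cur {
        parts.push(e.to_string());
        cur = e.source();
    }
    parts.join(": ")
}
```

With the wrap preserved, the caller sees "api key not configured: secret missing" in one place; with a freshly constructed error it sees only the outer message and must hunt through logs for the cause.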

Comment on lines +159 to +162
let api_key = config.get_secret(def.config_key).map_err(|e| {
tracing::error!("{} not configured: {}", def.config_key, e);
anyhow::anyhow!("{} not configured", def.config_key)
})?;
Copilot AI commented Feb 12, 2026:

This map_err turns the config secret error into a new message and logs the underlying error, but it discards the original error as the source (making troubleshooting harder when only the returned error is surfaced); consider using .context/.with_context to preserve the cause instead of logging here.

Comment on lines 201 to +207
let part = reqwest::multipart::Part::bytes(audio_bytes)
.file_name(format!("audio.{}", extension))
.mime_str(mime_type)
.context("Failed to create multipart")?;
.map_err(|e| {
tracing::error!("Failed to create multipart: {}", e);
anyhow::anyhow!(e)
})?;
Copilot AI commented Feb 12, 2026:

This mime_str error handling replaces the previous .context("Failed to create multipart") with logging + anyhow!(e), which drops context from the returned error; consider restoring .context (and avoid logging here) so callers get an actionable error chain.

Comment on lines +237 to +240
let data: serde_json::Value = response.json().await.map_err(|e| {
tracing::error!("Failed to parse response: {}", e);
anyhow::anyhow!(e)
})?;
Copilot AI commented Feb 12, 2026:

This JSON parse error handling drops the previous .context("Failed to parse response"), making the returned error less informative when logs aren’t available; consider restoring .context/.with_context here instead of logging inline.

Comment on lines 423 to 437
let mel_flat = mel_segment.flatten_all()?;
let mel_mean: f32 = mel_flat.mean(0)?.to_scalar()?;
let mel_max: f32 = mel_flat.max(0)?.to_scalar()?;
let mel_min: f32 = mel_flat.min(0)?.to_scalar()?;
tracing::debug!(mel_mean, mel_max, mel_min, "mel segment statistics");

self.model.decoder.reset_kv_cache();
let audio_features = self.model.encoder.forward(&mel_segment, true)?;

// Debug: check encoder output statistics
let af_flat = audio_features.flatten_all()?;
let af_mean: f32 = af_flat.mean(0)?.to_scalar()?;
let af_max: f32 = af_flat.max(0)?.to_scalar()?;
let af_min: f32 = af_flat.min(0)?.to_scalar()?;
tracing::debug!(af_mean, af_max, af_min, "audio features statistics");
Copilot AI commented Feb 12, 2026:

The mel/audio feature statistics are computed unconditionally (flatten/mean/max/min + to_scalar), which adds significant per-segment overhead even when debug logging is disabled; wrap these computations in a tracing::enabled!(tracing::Level::DEBUG) guard or remove them from the hot path.

Suggested change
let mel_flat = mel_segment.flatten_all()?;
let mel_mean: f32 = mel_flat.mean(0)?.to_scalar()?;
let mel_max: f32 = mel_flat.max(0)?.to_scalar()?;
let mel_min: f32 = mel_flat.min(0)?.to_scalar()?;
tracing::debug!(mel_mean, mel_max, mel_min, "mel segment statistics");
self.model.decoder.reset_kv_cache();
let audio_features = self.model.encoder.forward(&mel_segment, true)?;
// Debug: check encoder output statistics
let af_flat = audio_features.flatten_all()?;
let af_mean: f32 = af_flat.mean(0)?.to_scalar()?;
let af_max: f32 = af_flat.max(0)?.to_scalar()?;
let af_min: f32 = af_flat.min(0)?.to_scalar()?;
tracing::debug!(af_mean, af_max, af_min, "audio features statistics");
if tracing::enabled!(tracing::Level::DEBUG) {
let mel_flat = mel_segment.flatten_all()?;
let mel_mean: f32 = mel_flat.mean(0)?.to_scalar()?;
let mel_max: f32 = mel_flat.max(0)?.to_scalar()?;
let mel_min: f32 = mel_flat.min(0)?.to_scalar()?;
tracing::debug!(mel_mean, mel_max, mel_min, "mel segment statistics");
}
self.model.decoder.reset_kv_cache();
let audio_features = self.model.encoder.forward(&mel_segment, true)?;
// Debug: check encoder output statistics
if tracing::enabled!(tracing::Level::DEBUG) {
let af_flat = audio_features.flatten_all()?;
let af_mean: f32 = af_flat.mean(0)?.to_scalar()?;
let af_max: f32 = af_flat.max(0)?.to_scalar()?;
let af_min: f32 = af_flat.min(0)?.to_scalar()?;
tracing::debug!(af_mean, af_max, af_min, "audio features statistics");
}

sampled_tokens[sampled_tokens.len() - 2] >= TIMESTAMP_BEGIN
} else {
-    false
+    true
Copilot AI commented Feb 12, 2026:

penultimate_was_timestamp defaults to true when there is only 1 sampled token, which makes the pairing rule treat a single timestamp as if it already had a preceding timestamp and forces non-timestamp output next; this should default to false so a lone timestamp triggers the "need a second timestamp" branch.

Suggested change
true
false
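The rule under debate reduces to a tiny predicate. The PR defaults it to true when only one token has been sampled; the reviewer argues for false, since with fewer than two tokens there is no penultimate token that could have been a timestamp. A sketch of the reviewer's reading (the `TIMESTAMP_BEGIN` id is a placeholder for illustration, not this repo's constant):

```rust
// Whisper timestamp tokens occupy ids at or above TIMESTAMP_BEGIN;
// everything below is text or special tokens. Placeholder value.
const TIMESTAMP_BEGIN: u32 = 50_364;

/// With fewer than two sampled tokens there is no penultimate token,
/// so the reviewer's suggested default is `false` -- a lone leading
/// timestamp then takes the "need a second timestamp" branch.
fn penultimate_was_timestamp(sampled: &[u32]) -> bool {
    if sampled.len() >= 2 {
        sampled[sampled.len() - 2] >= TIMESTAMP_BEGIN
    } else {
        false
    }
}
```

Flipping the default to true, as the PR does, makes a lone timestamp behave as if it already had a paired predecessor, which forces non-timestamp output next; that is plausibly what suppresses the timestamp-repetition loops, but it is a behavior change worth a test either way.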

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 12, 2026 17:03
Copilot AI (Contributor) left a comment:

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 9 comments.

Comment on lines 299 to 376
@@ -292,21 +335,65 @@ impl WhisperTranscriber {
while seek < content_frames {
segment_num += 1;
let segment_size = usize::min(content_frames - seek, N_FRAMES);
tracing::debug!(segment_num, segment_size, seek, "processing segment");

let segment_text_tokens =
self.process_segment(&mel_tensor, seek, segment_size, segment_num, num_segments)?;

tracing::debug!(
tokens_in_segment = segment_text_tokens.len(),
"segment produced tokens"
);
all_text_tokens.extend(segment_text_tokens);
seek += segment_size;
}

self.decode_tokens(&all_text_tokens)
tracing::debug!(
total_tokens = all_text_tokens.len(),
"decoding tokens to text"
);

if all_text_tokens.is_empty() {
tracing::warn!(
audio_bytes = audio_data.len(),
audio_duration_secs,
num_segments,
"no tokens produced from audio - possible silence or unrecognized speech"
);
return Ok(String::new());
}

let raw_result = self.decode_tokens(&all_text_tokens)?;
let result = deduplicate_text(&raw_result);
if result != raw_result {
tracing::debug!(
before_len = raw_result.len(),
after_len = result.len(),
"text-level deduplication removed repeated phrases"
);
}
tracing::debug!(result_len = result.len(), "transcription complete");
Ok(result)
Copilot AI commented Feb 12, 2026:

Excessive debug logging in the transcription hot path. These logs fire on every transcription call and add significant noise. The audio statistics and segment processing logs are especially verbose. Consider keeping only the warning at line 357 and the final deduplication log at line 369, removing the rest.

Copilot generated this review using guidance from repository custom instructions.
Comment on lines 380 to 396
tracing::debug!(audio_bytes = audio_data.len(), "decoding audio to PCM");
let pcm_data = decode_audio_simple(audio_data)?;
let pcm_samples = pcm_data.len();
tracing::debug!(pcm_samples, "converting PCM to mel spectrogram");

// Calculate actual content frames from PCM length (HOP_LENGTH = 160)
// pcm_to_mel pads to 30 seconds, but we only want to process actual audio
let actual_content_frames = pcm_samples / 160;

let mel = audio::pcm_to_mel(&self.config, &pcm_data, &self.mel_filters);
let mel_len = mel.len();
tracing::debug!(
mel_len,
num_mel_bins = self.config.num_mel_bins,
actual_content_frames,
"creating mel tensor"
);
Copilot AI commented Feb 12, 2026:

Debug logging in prepare_audio_input is excessive. This function is called on every transcription. Keep logging minimal in the hot path - consider removing all but critical errors.

Comment on lines 385 to 387
// Calculate actual content frames from PCM length (HOP_LENGTH = 160)
// pcm_to_mel pads to 30 seconds, but we only want to process actual audio
let actual_content_frames = pcm_samples / 160;
Copilot AI commented Feb 12, 2026:

The calculation assumes HOP_LENGTH = 160, but this value isn't defined as a constant. If HOP_LENGTH changes in the audio module, this will break. Consider importing the constant or adding a comment explaining the dependency.

Suggested change
// Calculate actual content frames from PCM length (HOP_LENGTH = 160)
// pcm_to_mel pads to 30 seconds, but we only want to process actual audio
let actual_content_frames = pcm_samples / 160;
// Calculate actual content frames from PCM length using the same hop length as `pcm_to_mel`
// pcm_to_mel pads to 30 seconds, but we only want to process actual audio
let actual_content_frames = pcm_samples / audio::HOP_LENGTH as usize;
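The frame accounting behind that suggestion is simple arithmetic: `pcm_to_mel` always pads out a full 30 s window, but only `pcm_samples / HOP_LENGTH` of the resulting frames carry real audio. A sketch using Whisper's standard front-end constants (assumed values for illustration, not imports from this repo's audio module):

```rust
// Standard Whisper front-end parameters (assumed, not this repo's):
// 16 kHz audio with a hop of 160 samples gives 100 mel frames per
// second, so one padded 30 s window is 3000 frames.
const SAMPLE_RATE: usize = 16_000;
const HOP_LENGTH: usize = 160;
const N_FRAMES: usize = 3000;

/// Frames of real audio content in a PCM buffer, as computed in the PR.
fn content_frames(pcm_samples: usize) -> usize {
    pcm_samples / HOP_LENGTH
}
```

A 5 s clip thus yields 500 content frames, so the segment loop (`while seek < content_frames`) stops well short of the padded 3000-frame boundary instead of transcribing silence, which is where the repeated hallucinations came from.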

sampled_tokens[sampled_tokens.len() - 2] >= TIMESTAMP_BEGIN
} else {
-    false
+    true
Copilot AI commented Feb 12, 2026:

The change from false to true modifies timestamp pairing behavior when there are fewer than 2 sampled tokens. With 1 token that's a timestamp, old code allowed only timestamps/EOT next (penultimate=false → lines 595-604), new code forbids timestamps (penultimate=true → lines 586-594). This forces text generation after an initial timestamp. While the comment at lines 614-616 suggests this prevents timestamp repetition, this is a subtle behavior change that should be validated with tests to ensure it doesn't break transcription for edge cases.

Suggested change
true
false

Comment on lines 206 to 257
@@ -220,6 +226,11 @@ impl WhisperTranscriber {
let model =
get_model(model_id).ok_or_else(|| anyhow::anyhow!("Unknown model: {}", model_id))?;
let config = model.config();
tracing::debug!(
num_mel_bins = config.num_mel_bins,
d_model = config.d_model,
"loaded model config"
);

let mel_bytes = match config.num_mel_bins {
80 => include_bytes!("whisper_data/melfilters.bytes").as_slice(),
@@ -231,14 +242,19 @@ impl WhisperTranscriber {
&mut &mel_bytes[..],
&mut mel_filters,
)?;
tracing::debug!(mel_filters_len = mel_filters.len(), "loaded mel filters");

tracing::debug!("loading GGUF model weights");
let vb = candle_transformers::quantized_var_builder::VarBuilder::from_gguf(
model_path_ref,
&device,
)?;
let model = m::quantized_model::Whisper::load(&vb, config.clone())?;
tracing::debug!("model weights loaded successfully");

tracing::debug!("loading tokenizer");
let tokenizer = Self::load_tokenizer(model_path_ref, Some(bundled_tokenizer))?;
tracing::debug!("tokenizer loaded successfully");
Copilot AI commented Feb 12, 2026:

Excessive debug logging throughout initialization. The codebase needs less logging, not more. Consider removing most of these debug statements and keeping only critical error/warning logs. These detailed initialization logs are unlikely to be useful in production and add noise.

Comment on lines 421 to 505
@@ -374,15 +476,47 @@ impl WhisperTranscriber {

tokens.push(next_token);

if next_token == EOT_TOKEN || tokens.len() > self.config.max_target_positions {
if next_token == EOT_TOKEN {
tracing::debug!(tokens_generated = tokens.len() - 3, "EOT token received");
break;
}
if tokens.len() > self.config.max_target_positions {
tracing::debug!("max target positions reached");
break;
}

// Detect repeating patterns by looking for the current token earlier in the sequence
// and checking if the preceding tokens also match (i.e., a repeated phrase)
if let Some(truncate_at) = self.detect_repetition(&tokens) {
tracing::debug!(
truncate_at,
tokens_before = tokens.len(),
"repetition detected, truncating"
);
tokens.truncate(truncate_at);
break;
}
}

// Log all generated tokens for debugging
tracing::debug!(
all_tokens = ?&tokens[3..],
"all tokens generated in segment"
);
Copilot AI commented Feb 12, 2026:

Excessive debug logging in process_segment hot path. The mel statistics logs (lines 422-427) and encoder output statistics (lines 432-437) execute on every audio segment and perform extra tensor computations. This adds computational overhead and log noise for debugging info that's rarely needed.

Comment on lines 893 to 998
@@ -621,11 +929,14 @@ fn decode_audio_simple(audio_data: &[u8]) -> Result<Vec<f32>> {
anyhow::bail!("No channel information in audio track (neither channels nor channel_layout)")
};

tracing::debug!(sample_rate, channels, "audio format detected");

let mut decoder = symphonia::default::get_codecs()
.make(&track.codec_params, &DecoderOptions::default())
.context("Failed to create audio decoder - please ensure browser sends WAV format audio")?;

let mut pcm_data = Vec::new();
let mut packet_count = 0;

loop {
let packet = match format.next_packet() {
@@ -641,6 +952,7 @@ fn decode_audio_simple(audio_data: &[u8]) -> Result<Vec<f32>> {
match decoder.decode(&packet) {
Ok(decoded) => {
pcm_data.extend(audio_buffer_to_f32(&decoded));
packet_count += 1;
}
Err(symphonia::core::errors::Error::DecodeError(_)) => {
continue;
@@ -649,18 +961,42 @@ fn decode_audio_simple(audio_data: &[u8]) -> Result<Vec<f32>> {
}
}

tracing::debug!(
packet_count,
pcm_samples = pcm_data.len(),
"decoded audio packets"
);

let mono_data = if channels > 1 {
tracing::debug!(channels, "converting to mono");
convert_to_mono(&pcm_data, channels)
} else {
pcm_data
};

let resampled = if sample_rate != 16000 {
tracing::debug!(from_rate = sample_rate, to_rate = 16000, "resampling audio");
resample_audio(&mono_data, sample_rate, 16000)?
} else {
mono_data
};

// Log PCM statistics to diagnose quiet/corrupt audio
if !resampled.is_empty() {
let max_abs = resampled.iter().map(|s| s.abs()).fold(0.0f32, f32::max);
let mean_abs = resampled.iter().map(|s| s.abs()).sum::<f32>() / resampled.len() as f32;
let rms = (resampled.iter().map(|s| s * s).sum::<f32>() / resampled.len() as f32).sqrt();
tracing::debug!(
output_samples = resampled.len(),
max_abs,
mean_abs,
rms,
"audio decoding complete with PCM stats"
);
} else {
tracing::debug!(output_samples = 0, "audio decoding complete (empty)");
}
Copilot AI commented Feb 12, 2026:

Debug logging in decode_audio_simple adds noise. This function is called on every transcription. The statistics calculations at lines 986-988 add computational overhead. Consider removing most logging here except critical errors.

Comment on lines 149 to 240
@@ -185,7 +181,10 @@ fn build_api_client(provider: DictationProvider) -> Result<ApiClient> {
DictationProvider::Local => anyhow::bail!("Local provider should not use API client"),
};

-    ApiClient::with_timeout(base_url, auth, REQUEST_TIMEOUT).context("Failed to create API client")
+    ApiClient::with_timeout(base_url, auth, REQUEST_TIMEOUT).map_err(|e| {
+        tracing::error!("Failed to create API client: {}", e);
+        e
+    })
}

pub async fn transcribe_with_provider(
@@ -202,7 +201,10 @@ pub async fn transcribe_with_provider(
let part = reqwest::multipart::Part::bytes(audio_bytes)
.file_name(format!("audio.{}", extension))
.mime_str(mime_type)
-        .context("Failed to create multipart")?;
+        .map_err(|e| {
+            tracing::error!("Failed to create multipart: {}", e);
+            anyhow::anyhow!(e)
+        })?;

let form = reqwest::multipart::Form::new()
.part("file", part)
@@ -212,7 +214,10 @@ pub async fn transcribe_with_provider(
.request(None, def.endpoint_path)
.multipart_post(form)
.await
-        .context("Request failed")?;
+        .map_err(|e| {
+            tracing::error!("Request failed: {}", e);
+            e
+        })?;

if !response.status().is_success() {
let status = response.status();
@@ -229,7 +234,10 @@ pub async fn transcribe_with_provider(
}
}

-    let data: serde_json::Value = response.json().await.context("Failed to parse response")?;
+    let data: serde_json::Value = response.json().await.map_err(|e| {
+        tracing::error!("Failed to parse response: {}", e);
+        anyhow::anyhow!(e)
+    })?;
Copilot AI commented Feb 12, 2026:

These error handling changes replace concise .context() calls with verbose map_err closures that log and then recreate the error. This adds duplicate logging throughout. The pattern at lines 149-152, 159-162, 184-187, 204-207, 217-220, 237-240 all follow this same problematic pattern. Revert to using .context() which is more idiomatic and avoids the duplicate logging.

// Timestamps shouldn't decrease; forbid timestamp tokens smaller than or equal to the last.
// When last_was_timestamp && !penultimate_was_timestamp, we just output an "end" timestamp
// after text, so we need to advance past it to prevent repeating.
let timestamp_last = timestamp_tokens.last().unwrap() + 1;
Copilot AI commented Feb 12, 2026:

The timestamp_last calculation was simplified to always use + 1, removing the conditional logic. The comment explains the reasoning (prevent repeating after outputting an "end" timestamp). However, this changes behavior when last_was_timestamp && !penultimate_was_timestamp - previously it wouldn't increment, now it always does. This should be tested to ensure it doesn't cause issues with timestamp generation.

Suggested change
let timestamp_last = timestamp_tokens.last().unwrap() + 1;
let mut timestamp_last = *timestamp_tokens.last().unwrap();
if last_was_timestamp && !penultimate_was_timestamp {
timestamp_last += 1;
}

@DOsinga DOsinga merged commit d2158fa into main Feb 12, 2026
18 of 19 checks passed
@DOsinga DOsinga deleted the fix-speech-local branch February 12, 2026 17:43
katzdave pushed a commit that referenced this pull request Feb 12, 2026
Co-authored-by: Douwe Osinga <douwe@squareup.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
tlongwell-block added a commit that referenced this pull request Feb 12, 2026
…provenance

* origin/main: (68 commits)
  Upgraded npm packages for latest security updates (#7183)
  docs: reasoning effort levels for Codex provider (#6798)
  Fix speech local (#7181)
  chore: add .gooseignore to .gitignore (#6826)
  Improve error message logging from electron (#7130)
  chore(deps): bump jsonwebtoken from 9.3.1 to 10.3.0 (#6924)
  docs: standalone mcp apps and apps extension (#6791)
  workflow: auto-update cli-commands on release (#6755)
  feat(apps): Integrate AppRenderer from @mcp-ui/client SDK (#7013)
  fix(MCP): decode resource content (#7155)
  feat: reasoning_content in API for reasoning models (#6322)
  Fix/configure add provider custom headers (#7157)
  fix: handle keyring fallback as success (#7177)
  Update process-wrap to 9.0.3 (9.0.2 is yanked) (#7176)
  feat: support extra field in chatcompletion tool_calls for gemini openai compat (#6184)
  fix: replace panic with proper error handling in get_tokenizer (#7175)
  Lifei/smoke test for developer (#7174)
  fix text editor view broken (#7167)
  docs: White label guide (#6857)
  Add PATH detection back to developer extension (#7161)
  ...

# Conflicts:
#	.github/workflows/nightly.yml
jh-block added a commit that referenced this pull request Feb 13, 2026
* origin/main: (21 commits)
  nit: show dir in title, and less... jank (#7138)
  feat(gemini-cli): use stream-json output and re-use session (#7118)
  chore(deps): bump qs from 6.14.1 to 6.14.2 in /documentation (#7191)
  Switch jsonwebtoken to use aws-lc-rs (already used by rustls) (#7189)
  chore(deps): bump qs from 6.14.1 to 6.14.2 in /evals/open-model-gym/mcp-harness (#7184)
  Add SLSA build provenance attestations to release workflows (#7097)
  fix save and run recipe not working (#7186)
  Upgraded npm packages for latest security updates (#7183)
  docs: reasoning effort levels for Codex provider (#6798)
  Fix speech local (#7181)
  chore: add .gooseignore to .gitignore (#6826)
  Improve error message logging from electron (#7130)
  chore(deps): bump jsonwebtoken from 9.3.1 to 10.3.0 (#6924)
  docs: standalone mcp apps and apps extension (#6791)
  workflow: auto-update cli-commands on release (#6755)
  feat(apps): Integrate AppRenderer from @mcp-ui/client SDK (#7013)
  fix(MCP): decode resource content (#7155)
  feat: reasoning_content in API for reasoning models (#6322)
  Fix/configure add provider custom headers (#7157)
  fix: handle keyring fallback as success (#7177)
  ...