📝 Walkthrough

This PR migrates local STT servers to an actor-based model (`ractor`), updates error handling for `TranscribeService` (`String` errors, test wrapper via `HandleError`), adds lifecycle/supervision/shutdown handling to the listener, restructures local-stt APIs and health checks via actors/registry, and updates dependencies (adds `ractor`, bumps `backon`).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant App as App
    participant Listener as ListenerActor
    participant Registry as ractor::registry
    participant LSTT as Local STT Ext
    participant Int as InternalSTTActor
    participant Ext as ExternalSTTActor
    App->>Listener: start()
    Listener->>LSTT: start_server(...)
    LSTT->>Int: Actor::spawn(InternalSTTArgs)
    LSTT->>Ext: Actor::spawn(ExternalSTTArgs)
    LSTT->>Registry: register(Int, Ext)
    Note over Int,Ext: Actors initialize and become available
    Listener->>LSTT: get_servers()
    LSTT->>Int: GetHealth
    Int-->>LSTT: (base_url, Ready)
    LSTT->>Ext: GetHealth
    Ext-->>LSTT: (base_url, Health|Error)
    LSTT-->>Listener: Servers {internal, external}
    Note right of Listener: Processes audio events<br/>until shutdown
    App-->>Listener: stop (supervision/shutdown)
    Listener->>LSTT: stop_server()
    LSTT->>Registry: lookup(Int/Ext)
    LSTT->>Int: stop_and_wait
    LSTT->>Ext: stop_and_wait
    Int-->>LSTT: stopped
    Ext-->>LSTT: stopped
```
```mermaid
sequenceDiagram
    autonumber
    participant Test as Test (axum)
    participant Router as axum Router
    participant HE as HandleError
    participant TS as TranscribeService
    Test->>Router: request(from_realtime_audio)
    Router->>HE: call
    HE->>TS: call
    TS-->>HE: Result<..., Err(String)>
    HE-->>Router: Map Err(String) -> StatusCode 500
    Router-->>Test: HTTP 500 on error
```
```mermaid
sequenceDiagram
    autonumber
    participant Ext as ExternalSTTActor
    participant Sidecar as STT Sidecar Process
    participant Backon as backon::retry
    participant Client as hypr_am::Client
    Ext->>Sidecar: spawn()
    Ext->>Backon: retry InitRequest until OK
    Backon->>Client: init_request()
    Client-->>Backon: OK or Err
    Backon-->>Ext: init OK or retries exhausted
    Ext->>Client: status()
    Client-->>Ext: Health
    Note over Ext: handle(GetHealth) -> reply (base_url, health)
    Sidecar-->>Ext: Terminated/Error
    Ext->>Ext: handle(ProcessTerminated) -> update state
    Ext-->Sidecar: kill on post_stop
```
```mermaid
sequenceDiagram
    autonumber
    participant Int as InternalSTTActor
    participant Whisper as TranscribeService
    participant Http as axum server
    Int->>Whisper: build service/router (CORS)
    Int->>Http: bind & serve (graceful)
    Note over Int: handle(GetHealth) -> (base_url, Ready)
    Int->>Http: signal shutdown on post_stop
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
Pre-merge checks: ❌ Failed checks (2 warnings) | ✅ Passed checks (1 passed)
---
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
plugins/local-stt/src/ext.rs (2)
286-305: Propagate actor spawn errors instead of unwrapping.
`Actor::spawn(...).await` already returns a `Result`. If spawning fails, `.unwrap()` will bring down the whole process. Please propagate the error back to the caller. For example:
```diff
-    let (server, _) = Actor::spawn(
+    let (server, _) = Actor::spawn(
         Some(internal::InternalSTTActor::name()),
         internal::InternalSTTActor,
         internal::InternalSTTArgs {
             model_cache_dir: cache_dir,
             model_type: whisper_model,
         },
-    )
-    .await
-    .unwrap();
+    )
+    .await?;
```
314-379: External server path has the same panic hazards.

Every `.unwrap()` on the spawn and health check can crash the app. Please mirror the internal path fix: propagate errors and use explicit durations.
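For the external path, a minimal sketch of the same fix, assuming `crate::Error` converts from `ractor::SpawnErr` and has a `ServerNotReady` variant (both assumptions, not the plugin's actual API):

```rust
// Sketch only: mirrors the internal-path fix for the external server.
async fn start_external(args: external::ExternalSTTArgs) -> Result<String, crate::Error> {
    let (_server, _join) = Actor::spawn(
        Some(external::ExternalSTTActor::name()),
        external::ExternalSTTActor,
        args,
    )
    .await?; // propagate instead of unwrap()

    // Surface a typed error if the health probe never succeeds.
    external_health()
        .await
        .map(|(base_url, _health)| base_url)
        .ok_or(crate::Error::ServerNotReady)
}
```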
plugins/local-stt/src/lib.rs (1)

18-28: Danger: `State` still referenced with removed fields.

`State` no longer carries `internal_server`/`external_server`, but `ext::start_internal()` and `ext::start_external()` still try to read/write those fields, leading to build failures (`no field 'internal_server' on type 'State'`). Please drop the obsolete mutations or re-home the handles before removing the fields.

```diff
- state.internal_server = Some(handle.clone());
+ // store the handle wherever the new actor flow expects it,
+ // or remove this assignment entirely if no longer needed.
```
🧹 Nitpick comments (1)
plugins/local-stt/src/server/external.rs (1)
117-144: Drop the API key once initialization succeeds.

We only need the API key during initialization. Leaving it in `state.api_key` keeps sensitive material around for the full actor lifetime and risks accidental logging or reuse. Apply this diff to clear the key after a successful init:

```diff
     tracing::info!(res = ?res);
+    state.api_key = None;
     Ok(())
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- Cargo.toml (2 hunks)
- crates/transcribe-whisper-local/src/lib.rs (3 hunks)
- crates/transcribe-whisper-local/src/service/streaming.rs (1 hunks)
- plugins/listener/Cargo.toml (1 hunks)
- plugins/listener/src/actors/listener.rs (3 hunks)
- plugins/local-stt/Cargo.toml (1 hunks)
- plugins/local-stt/src/ext.rs (9 hunks)
- plugins/local-stt/src/lib.rs (1 hunks)
- plugins/local-stt/src/server/external.rs (1 hunks)
- plugins/local-stt/src/server/internal.rs (1 hunks)
- plugins/local-stt/src/server/mod.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit configuration file
**/*.{js,ts,tsx,rs}: 1. Do not add any error handling. Keep the existing one.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/local-stt/src/server/mod.rs
- crates/transcribe-whisper-local/src/service/streaming.rs
- crates/transcribe-whisper-local/src/lib.rs
- plugins/local-stt/src/server/internal.rs
- plugins/local-stt/src/server/external.rs
- plugins/local-stt/src/lib.rs
- plugins/local-stt/src/ext.rs
- plugins/listener/src/actors/listener.rs
🧬 Code graph analysis (5)
- crates/transcribe-whisper-local/src/lib.rs (1)
  - crates/transcribe-whisper-local/src/service/streaming.rs (2): `builder` (33-35), `model_path` (45-48)
- plugins/local-stt/src/server/internal.rs (3)
  - plugins/listener/src/actors/listener.rs (5): `tokio` (106-106), `name` (35-37), `pre_start` (45-53), `post_stop` (55-62), `handle` (64-76)
  - plugins/local-stt/src/server/external.rs (4): `name` (33-35), `pre_start` (43-111), `post_stop` (146-166), `handle` (168-196)
  - crates/transcribe-whisper-local/src/service/streaming.rs (2): `model_path` (45-48), `builder` (33-35)
- plugins/local-stt/src/server/external.rs (4)
  - plugins/listener/src/actors/listener.rs (5): `tokio` (106-106), `name` (35-37), `pre_start` (45-53), `post_stop` (55-62), `handle` (64-76)
  - plugins/local-stt/src/server/internal.rs (4): `name` (33-35), `pre_start` (43-93), `post_stop` (95-104), `handle` (106-124)
  - crates/am/src/client.rs (3): `new` (11-16), `new` (99-106), `status` (25-34)
  - crates/host/src/lib.rs (1): `kill_processes_by_matcher` (39-61)
- plugins/local-stt/src/ext.rs (2)
  - plugins/local-stt/src/server/external.rs (1): `name` (33-35)
  - plugins/local-stt/src/server/internal.rs (1): `name` (33-35)
- plugins/listener/src/actors/listener.rs (5)
  - plugins/local-stt/src/server/external.rs (1): `post_stop` (146-166)
  - plugins/local-stt/src/server/internal.rs (1): `post_stop` (95-104)
  - plugins/listener/src/actors/source.rs (1): `post_stop` (175-188)
  - plugins/listener/src/actors/session.rs (2): `post_stop` (241-272), `handle_supervisor_evt` (191-239)
  - plugins/listener/src/actors/recorder.rs (1): `post_stop` (153-168)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-14)
🔇 Additional comments (8)
plugins/listener/Cargo.toml (1)
64-64: Workspace alignment looks good.

Switching `ractor` to the workspace dependency keeps this crate in sync with the shared version. 👍

crates/transcribe-whisper-local/src/service/streaming.rs (1)
65-67: Confirm downstream assumptions about the new error type.

Changing `Service::Error` to `String` works for the current handlers (they still always return `Ok(Response)`), but please double-check any call sites relying on `Infallible`. If any compile errors crop up, we'll want to update those call sites to expect `String`.

crates/transcribe-whisper-local/src/lib.rs (1)
22-25: Wrapping the service with `HandleError` is the right move.

This keeps the tests aligned with the new `String` error surface and cleanly maps failures to HTTP 500.

plugins/listener/src/actors/listener.rs (1)
50-53: Monitor initialization needs error handling.
`pg::monitor` returns a `Result`. Right now the `Err` path is ignored; if monitoring fails we'll proceed without supervision and never know. Please propagate or log the failure instead of dropping it. You can bubble the error like this:

```diff
- pg::monitor(tauri_plugin_local_stt::GROUP.into(), myself.get_cell());
+ pg::monitor(tauri_plugin_local_stt::GROUP.into(), myself.get_cell())
+     .map_err(|e| ActorProcessingErr::from(e))?;
```

Likely an incorrect or invalid review comment.
plugins/local-stt/src/server/external.rs (1)
50-53: Prevent panic when reserving port
`port_check::free_local_port()` returns `None` when it cannot find a free port (port exhaustion, race, permission issues). The current `unwrap()` turns that into a panic, tearing the actor down instead of letting supervision handle a recoverable failure. Apply this diff to propagate the error instead of panicking:

```diff
- let port = port_check::free_local_port().unwrap();
+ let port = port_check::free_local_port()
+     .ok_or_else(|| {
+         std::io::Error::new(
+             std::io::ErrorKind::AddrNotAvailable,
+             "no free port available",
+         )
+     })?;
```

Likely an incorrect or invalid review comment.
Cargo.toml (1)
132-142: Workspace dependency bump looks consistent.

Adding `ractor = "0.15"` and bumping `backon` to `1.5.2` at the workspace level lines up with the actor-based refactor described in the PR goals, and there's nothing in the repo that would clash with those versions. 👍
4-4: Public `GROUP` constant works for actor grouping.

Exposing the `"stt"` `GROUP` constant here (and re-exporting it) gives the rest of the codebase a single source of truth for the actor group identifier. Solid improvement.
77-83: Dependency additions align with the actor migration.

Bringing `ractor`, `futures-util`, `tokio`/`tokio-util`, `tracing`, and `backon` into the crate mirrors the new async actor flow; these are already workspace-managed, so it keeps versions consistent. Looks good.
---
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
plugins/local-stt/src/ext.rs (1)
556-562: Unwraps in background task can crash the task/thread.
`calculate_file_checksum(...).unwrap()` and `remove_file(...).unwrap()` can panic. Prefer handling failures (log and send -1) to avoid unexpected panics in the spawned task.
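A panic-free sketch of that task, treating `calculate_file_checksum` as synchronous and `expected` as the checksum from the model manifest (both assumptions here):

```rust
// Send -1 on any verification failure instead of panicking the task.
tokio::spawn(async move {
    let verified = match calculate_file_checksum(&path) {
        Ok(actual) => actual == expected,
        Err(e) => {
            tracing::warn!("checksum_failed: {}", e);
            false
        }
    };
    if verified {
        let _ = channel.send(100);
    } else {
        if let Err(e) = std::fs::remove_file(&path) {
            tracing::warn!("remove_file_failed: {}", e);
        }
        let _ = channel.send(-1);
    }
});
```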
🧹 Nitpick comments (4)
plugins/local-stt/src/ext.rs (1)
475-488: Clamp and round download progress before casting to `i8`.

Casting raw `f64` to `i8` risks overflow/underflow and jitter. Clamp to [0, 100] after rounding.

```diff
- DownloadProgress::Progress(downloaded, total_size) => {
-     let percent = (downloaded as f64 / total_size as f64) * 100.0;
-     let _ = channel.send(percent as i8);
- }
+ DownloadProgress::Progress(downloaded, total_size) => {
+     let pct = ((downloaded as f64 / total_size as f64) * 100.0).round();
+     let pct = pct.clamp(0.0, 100.0) as i8;
+     let _ = channel.send(pct);
+ }
```

plugins/listener/src/actors/listener.rs (2)
184-221: Avoid `.unwrap()` on the event emission pipeline.

`emit(...).unwrap()` will panic on transient front-end disconnects. Return/log errors instead to keep the stream running.
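A sketch of the non-panicking variant (`event` and `app` stand in for the plugin's actual types):

```rust
// Log-and-continue: a dropped front-end listener should not kill the pipeline.
if let Err(e) = event.emit(&app) {
    tracing::warn!("listener_event_emit_failed: {}", e);
}
```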
242-258: Avoid `.unwrap()` on DB upsert.

`db_upsert_session(...).await.unwrap()` can panic and kill the task. Prefer returning/logging the error and continuing.

plugins/local-stt/src/server/external.rs (1)
48-52: Free-port probe is racy.
`free_local_port().unwrap()` followed by spawning a child to bind that port can race. Prefer letting the child choose the port, or a retry loop on bind failure, as sketched below.
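A retry-on-bind sketch; `spawn_sidecar_on` is a hypothetical helper that starts the sidecar bound to `port` and surfaces bind failures:

```rust
// Probing free_local_port() and binding later is inherently racy, so
// treat a failed bind as "try another port", not a fatal error.
async fn reserve_and_spawn() -> std::io::Result<(u16, tokio::process::Child)> {
    for _ in 0..5 {
        let Some(port) = port_check::free_local_port() else { continue };
        match spawn_sidecar_on(port).await {
            Ok(child) => return Ok((port, child)),
            Err(e) => tracing::warn!("sidecar_bind_failed on {}: {}", port, e),
        }
    }
    Err(std::io::Error::new(
        std::io::ErrorKind::AddrNotAvailable,
        "no free port available",
    ))
}
```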
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- plugins/listener/src/actors/listener.rs (8 hunks)
- plugins/local-stt/src/ext.rs (9 hunks)
- plugins/local-stt/src/lib.rs (1 hunks)
- plugins/local-stt/src/server/external.rs (1 hunks)
- plugins/local-stt/src/server/internal.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit configuration file
**/*.{js,ts,tsx,rs}: 1. Do not add any error handling. Keep the existing one.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/local-stt/src/lib.rs
- plugins/local-stt/src/ext.rs
- plugins/local-stt/src/server/internal.rs
- plugins/local-stt/src/server/external.rs
- plugins/listener/src/actors/listener.rs
🧬 Code graph analysis (4)
- plugins/local-stt/src/ext.rs (2)
  - plugins/local-stt/src/server/external.rs (1): `name` (33-35)
  - plugins/local-stt/src/server/internal.rs (1): `name` (33-35)
- plugins/local-stt/src/server/internal.rs (2)
  - plugins/listener/src/actors/listener.rs (6): `tokio` (122-122), `tokio` (123-123), `name` (36-38), `pre_start` (46-64), `post_stop` (66-76), `handle` (78-90)
  - plugins/local-stt/src/server/external.rs (4): `name` (33-35), `pre_start` (43-99), `post_stop` (134-152), `handle` (154-185)
- plugins/local-stt/src/server/external.rs (4)
  - plugins/local-stt/src/server/internal.rs (4): `name` (33-35), `pre_start` (43-91), `post_stop` (93-101), `handle` (103-121)
  - crates/am/src/client.rs (3): `new` (11-16), `new` (99-106), `status` (25-34)
  - plugins/local-stt/src/ext.rs (8): `state` (155-155), `state` (294-294), `state` (463-463), `state` (525-525), `state` (566-566), `state` (585-585), `models_dir` (23-23), `models_dir` (74-76)
  - crates/host/src/lib.rs (1): `kill_processes_by_matcher` (39-61)
- plugins/listener/src/actors/listener.rs (3)
  - plugins/local-stt/src/server/internal.rs (2): `post_stop` (93-101), `handle` (103-121)
  - plugins/listener/src/actors/session.rs (3): `post_stop` (241-272), `handle_supervisor_evt` (191-239), `handle` (118-189)
  - crates/ws/src/client.rs (1): `finalize_with_text` (23-27)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-14)
🔇 Additional comments (6)
plugins/local-stt/src/lib.rs (1)
18-18: Re-export widens public API surface; confirm intent and semver impact.

`pub use server::*;` exposes internal actor types/messages publicly. Ensure this is intentional and compatible with your semver guarantees.

plugins/local-stt/src/ext.rs (2)
361-397: Good: coordinated shutdown via `stop_and_wait`.

Using `stop_and_wait` and handling the `Result` ensures clean, awaited shutdowns. This addresses prior races.
629-647: Use `Duration` for `call_t!` timeout and avoid unwraps (already raised before).

Switch the numeric `10 * 1000` to an explicit `Duration`. Also prefer returning the error upward instead of mapping to `None`. This was flagged previously.

```diff
-async fn internal_health() -> Option<(String, ServerHealth)> {
+async fn internal_health() -> Option<(String, ServerHealth)> {
+    use std::time::Duration;
     match registry::where_is(internal::InternalSTTActor::name()) {
         Some(cell) => {
             let actor: ActorRef<internal::InternalSTTMessage> = cell.into();
-            match call_t!(actor, internal::InternalSTTMessage::GetHealth, 10 * 1000) {
+            match call_t!(actor, internal::InternalSTTMessage::GetHealth, Duration::from_secs(10)) {
                 Ok(r) => Some(r),
                 Err(_) => None,
             }
         }
         None => None,
     }
 }

-async fn external_health() -> Option<(String, ServerHealth)> {
+async fn external_health() -> Option<(String, ServerHealth)> {
+    use std::time::Duration;
     match registry::where_is(external::ExternalSTTActor::name()) {
         Some(cell) => {
             let actor: ActorRef<external::ExternalSTTMessage> = cell.into();
-            match call_t!(actor, external::ExternalSTTMessage::GetHealth, 10 * 1000) {
+            match call_t!(actor, external::ExternalSTTMessage::GetHealth, Duration::from_secs(10)) {
                 Ok(r) => Some(r),
                 Err(_) => None,
             }
         }
         None => None,
     }
 }
```

plugins/listener/src/actors/listener.rs (2)
66-76: Graceful shutdown path looks good.

Sending the shutdown signal before aborting the task increases chances of a clean teardown.
92-106: Stopping on any child failure is aggressive (previously flagged).

Terminating the listener on any `ActorFailed` removes recovery paths. Consider a bounded retry/supervision strategy instead.
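A bounded-restart sketch for the supervisor hook; the `restarts` counter on `ListenerState` and the cap of 3 are illustrative assumptions, and the actual respawn wiring is elided:

```rust
// Tolerate a few transient child failures before giving up entirely.
async fn handle_supervisor_evt(
    &self,
    myself: ActorRef<Self::Msg>,
    evt: SupervisionEvent,
    state: &mut ListenerState,
) -> Result<(), ActorProcessingErr> {
    if let SupervisionEvent::ActorFailed(who, err) = evt {
        state.restarts += 1;
        if state.restarts > 3 {
            tracing::error!("child {} failed repeatedly: {}; stopping", who.get_id(), err);
            myself.stop(None);
        } else {
            tracing::warn!("child {} failed: {}; respawning", who.get_id(), err);
            // respawn the failed child here instead of stopping the listener
        }
    }
    Ok(())
}
```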
plugins/local-stt/src/server/internal.rs (1)

119-120: Keep actor alive on request-level errors (previously flagged).

Turning `ServerError` into `Err(...)` tears the actor down on a single failed request.

```diff
- InternalSTTMessage::ServerError(e) => Err(e.into()),
+ InternalSTTMessage::ServerError(e) => {
+     tracing::error!("internal STT request failed: {}", e);
+     Ok(())
+ }
```
plugins/listener/src/actors/listener.rs (lines 56-64):

```rust
let (tx, rx_task, shutdown_tx) = spawn_rx_task(args, myself).await.unwrap();
let state = ListenerState {
    tx,
    rx_task,
    shutdown_tx: Some(shutdown_tx),
};

Ok(state)
}
```
Don't unwrap `spawn_rx_task` in `pre_start`.

`.await.unwrap()` will panic on setup failure. Propagate with `?` so the actor fails to start cleanly.
```diff
- let (tx, rx_task, shutdown_tx) = spawn_rx_task(args, myself).await.unwrap();
+ let (tx, rx_task, shutdown_tx) = spawn_rx_task(args, myself).await?;
```
plugins/local-stt/src/ext.rs (lines 267-276):

```rust
let (_server, _) = Actor::spawn(
    Some(internal::InternalSTTActor::name()),
    internal::InternalSTTActor,
    internal::InternalSTTArgs {
        model_cache_dir: cache_dir,
        model_type: whisper_model,
    },
)
.await
.unwrap();
```
Avoid panicking on actor spawn failures.

`Actor::spawn(...).await.unwrap()` will crash the app if spawn fails. Propagate the error instead of unwrapping.
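For `?` to work at these call sites, the plugin's error type needs a conversion from `ractor::SpawnErr`; a thiserror-style sketch of that glue (the `Error` enum shape here is an assumption):

```rust
#[derive(Debug, thiserror::Error)]
pub enum Error {
    #[error("actor spawn failed: {0}")]
    Spawn(#[from] ractor::SpawnErr),
    // ...existing variants unchanged
}

// The call site then becomes:
// let (_server, _) = Actor::spawn(name, actor, args).await?;
```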
plugins/local-stt/src/ext.rs (lines 278-279):

```rust
let base_url = internal_health().await.map(|r| r.0).unwrap();
Ok(base_url)
```
Health lookup unwrap can race and panic.

`internal_health().await.map(|r| r.0).unwrap()` will panic if the actor isn't ready within the `call_t!` timeout. Return a proper error or retry rather than unwrapping.
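A sketch without the unwrap, assuming a `ServerNotReady` variant on the plugin's error enum:

```rust
// Surface a typed error instead of panicking when the actor isn't ready.
let base_url = internal_health()
    .await
    .map(|(url, _health)| url)
    .ok_or(crate::Error::ServerNotReady)?;
Ok(base_url)
```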
plugins/local-stt/src/ext.rs (lines 331-342):

```rust
let (_server, _) = Actor::spawn(
    Some(external::ExternalSTTActor::name()),
    external::ExternalSTTActor,
    external::ExternalSTTArgs {
        cmd,
        api_key: am_key,
        model: am_model,
        models_dir: data_dir,
    },
)
.await
.unwrap();
```
External actor spawn unwrap is unsafe.

Same as internal: `Actor::spawn(...).await.unwrap()` will bring down the process on failure.
plugins/local-stt/src/ext.rs (lines 344-346):

```rust
let base_url = external_health().await.map(|v| v.0).unwrap();
Ok(base_url)
}
```
External health unwrap can panic.

`external_health().await.map(|v| v.0).unwrap()` can panic if the health check fails. Avoid unwrap here as well.
plugins/local-stt/src/server/external.rs (lines 118-128):

```rust
.retry(
    ConstantBuilder::default()
        .with_max_times(20)
        .with_delay(std::time::Duration::from_millis(500)),
)
.when(|e| {
    tracing::error!("external_stt_init_failed: {:?}", e);
    true
})
.sleep(tokio::time::sleep)
.await?;
```
Retry budget likely too small for model init.

20 attempts × 500ms is roughly 10s total. Model init can take much longer. Increase the budget and consider exponential backoff to avoid premature failure.
```diff
-    .retry(
-        ConstantBuilder::default()
-            .with_max_times(20)
-            .with_delay(std::time::Duration::from_millis(500)),
-    )
+    .retry(
+        ConstantBuilder::default()
+            .with_max_times(120) // ~60s at 500ms; tune per model size
+            .with_delay(std::time::Duration::from_millis(500)),
+    )
```
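If exponential backoff is preferred over a larger constant budget, `backon`'s `ExponentialBuilder` slots into the same call chain; the delay values below are illustrative and `do_init` stands in for the existing init call:

```rust
use backon::{ExponentialBuilder, Retryable};

// ~500ms, 1s, 2s, ... capped at 10s per wait, with jitter so
// concurrent retries spread out instead of thundering.
let res = (|| async { do_init().await })
    .retry(
        ExponentialBuilder::default()
            .with_min_delay(std::time::Duration::from_millis(500))
            .with_max_delay(std::time::Duration::from_secs(10))
            .with_max_times(30)
            .with_jitter(),
    )
    .when(|e| {
        tracing::error!("external_stt_init_failed: {:?}", e);
        true
    })
    .sleep(tokio::time::sleep)
    .await?;
```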
plugins/local-stt/src/server/external.rs (lines 178-180):

```rust
if let Err(e) = reply_port.send((state.base_url.clone(), status)) {
    return Err(e.into());
}
```
Don't kill the actor when `reply_port` is closed.

If the caller drops the RPC channel (timeout/cancel), sending will fail and you return `Err`, terminating the actor. Treat send failure as benign.
```diff
- if let Err(e) = reply_port.send((state.base_url.clone(), status)) {
-     return Err(e.into());
- }
+ let _ = reply_port.send((state.base_url.clone(), status));
```
plugins/local-stt/src/server/internal.rs (lines 77-85):

```rust
let server_task = tokio::spawn(async move {
    axum::serve(listener, router)
        .with_graceful_shutdown(async move {
            shutdown_rx.changed().await.ok();
        })
        .await
        .unwrap();
});
```
Server task `.unwrap()` can crash the actor thread.

If `axum::serve(...).await` errors, this `.unwrap()` will panic. Handle the error or log it without panicking.
```diff
-        .await
-        .unwrap();
+        .await
+        .unwrap_or_else(|e| tracing::error!("internal_stt_serve_error: {}", e));
```
plugins/local-stt/src/server/internal.rs (lines 113-115):

```rust
if let Err(e) = reply_port.send((state.base_url.clone(), status)) {
    return Err(e.into());
}
```
Do not tear down the actor on `reply_port` send failure.

If the caller timed out and dropped the port, `reply_port.send` fails and the actor returns `Err`, causing termination. Treat it as a no-op.
```diff
- if let Err(e) = reply_port.send((state.base_url.clone(), status)) {
-     return Err(e.into());
- }
+ let _ = reply_port.send((state.base_url.clone(), status));
```