Caution: Review failed. The pull request is closed.

📝 Walkthrough

Splits AI settings into separate LLM and STT views; adds/renames local-llm and local-stt plugin commands and model metadata (including Gemma3_4bQ4); refactors desktop license refresh to a status-driven, throttled schedule; updates bindings, permissions, locales, UI, and server shutdown/termination handling.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant LicenseComp as LicenseComponent
    participant Hook as useLicense
    participant Backend as keygen.validateCheckoutKey
    LicenseComp->>Hook: getLicenseStatus()
    Hook-->>LicenseComp: {needsRefresh,isValid}
    LicenseComp->>LicenseComp: setTimeout(INITIAL_DELAY)
    LicenseComp->>LicenseComp: setInterval(REFRESH_INTERVAL)
    LicenseComp->>Hook: attemptRefresh()
    alt needsRefresh && !pending && past RATE_LIMIT
        Hook->>Backend: refreshLicense.mutate()
        Backend-->>Hook: response
    else
        Hook-->>LicenseComp: skip
    end
```
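The gate in the alt branch above can be sketched as a pure predicate. This is a minimal illustrative sketch; the constant value and all names are assumptions, not the actual use-license.ts code:

```typescript
// Illustrative throttle gate for the license-refresh flow above.
// RATE_LIMIT_MS and every name here are assumed values for this sketch.
const RATE_LIMIT_MS = 60 * 60 * 1000;

interface LicenseStatus {
  needsRefresh: boolean;
  isValid: boolean;
}

function shouldAttemptRefresh(
  status: LicenseStatus,
  mutationPending: boolean,
  lastAttemptAt: number | null,
  now: number,
): boolean {
  if (!status.needsRefresh || mutationPending) return false;
  // Respect the rate limit between refresh attempts.
  if (lastAttemptAt !== null && now - lastAttemptAt < RATE_LIMIT_MS) return false;
  return true;
}
```

Each interval tick would evaluate this first; only a true result triggers the actual refresh mutation.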
```mermaid
sequenceDiagram
    participant Desktop as LLM UI
    participant Bindings as local-llm TS bindings
    participant Plugin as local-llm plugin
    Desktop->>Bindings: listSupportedModel()
    Bindings->>Plugin: invoke list_supported_model
    Plugin-->>Bindings: Vec<ModelInfo>
    Bindings-->>Desktop: ModelInfo[]
    Desktop->>Desktop: render models / selection
```
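On the desktop side, the consumer of the flow above typically just maps the returned ModelInfo[] into render data. A hedged sketch (the field names mirror the generated bindings; the sort-by-size choice is illustrative, not the app's actual behavior):

```typescript
// Shape mirrors the generated ModelInfo binding (description field omitted).
interface ModelInfo {
  key: string;
  name: string;
  size_bytes: number;
}

// Pure helper a view might apply after awaiting listSupportedModel():
// smallest model first, so the cheapest download leads the list.
function modelKeysBySize(models: ModelInfo[]): string[] {
  return [...models]
    .sort((a, b) => a.size_bytes - b.size_bytes)
    .map((m) => m.key);
}
```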
```mermaid
sequenceDiagram
    participant STTView as STT Local UI
    participant Bindings as local-stt TS bindings
    participant Plugin as local-stt plugin
    STTView->>Bindings: getServers()
    Bindings->>Plugin: invoke get_servers
    Plugin-->>Bindings: {Internal: url?, External: url?}
    Bindings-->>STTView: endpoints map
    STTView->>Bindings: listProModels()
    Bindings->>Plugin: invoke list_pro_models
    Plugin-->>Bindings: ModelInfo[]
    Bindings-->>STTView: pro models
```
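Because getServers returns a Partial record (entries with no URL are omitted), a consumer usually normalizes it before rendering. A sketch, assuming the two ServerType variants implied by the {Internal, External} shape above:

```typescript
type ServerType = "Internal" | "External"; // assumed variants

const SERVER_TYPES: ServerType[] = ["Internal", "External"];

// Inject missing keys with null and iterate in a fixed order,
// so the UI is stable regardless of map insertion order.
function normalizeServers(
  partial: Partial<Record<ServerType, string | null>>,
): Record<ServerType, string | null> {
  const out = {} as Record<ServerType, string | null>;
  for (const t of SERVER_TYPES) {
    out[t] = partial[t] ?? null;
  }
  return out;
}
```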
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
📜 Recent review details

Configuration used: .coderabbit.yaml

📒 Files selected for processing (7)
cubic analysis

No issues found across 10 files.
Actionable comments posted: 8
🔭 Outside diff range comments (9)
apps/desktop/src/components/toast/model-download.tsx (1)
26-42: Harden query enablement to avoid undefined access due to non-null assertions
checkForModelDownload only enables on the STT model, but uses both STT and LLM; the two download-state queries can also run before their current model values exist. This can invoke commands with undefined. Enable queries only when their required inputs are present.

```diff
 const checkForModelDownload = useQuery({
-  enabled: !!currentSttModel.data,
+  enabled: !!(currentSttModel.data && currentLlmModel.data),
   queryKey: ["check-model-downloaded"],
   queryFn: async () => {
     const [stt, llm] = await Promise.all([
       localSttCommands.isModelDownloaded(currentSttModel.data!),
       localLlmCommands.isModelDownloaded(currentLlmModel.data!),
     ]);
     return {
       currentSttModel,
       sttModelDownloaded: stt,
       llmModelDownloaded: llm,
     };
   },
   refetchInterval: 5000,
 });

 const sttModelDownloading = useQuery({
-  enabled: !checkForModelDownload.data?.sttModelDownloaded,
+  enabled: !!currentSttModel.data && !checkForModelDownload.data?.sttModelDownloaded,
   queryKey: ["stt-model-downloading"],
   queryFn: async () => {
     return localSttCommands.isModelDownloading(currentSttModel.data!);
   },
   refetchInterval: 3000,
 });

 const llmModelDownloading = useQuery({
-  enabled: !checkForModelDownload.data?.llmModelDownloaded,
+  enabled: !!currentLlmModel.data && !checkForModelDownload.data?.llmModelDownloaded,
   queryKey: ["llm-model-downloading"],
   queryFn: async () => {
     return localLlmCommands.isModelDownloading(currentLlmModel.data!);
   },
   refetchInterval: 3000,
 });
```

Also applies to: 44-51, 53-60
plugins/listener/src/fsm.rs (2)
584-585: Remove sensitive api_key from logs and log the actual API base

Logging tokens is a leak. Also, the logged api_base isn't the one used after hardcoding.
```diff
- tracing::info!(api_base = ?api_base, api_key = ?api_key, languages = ?languages, "listen_client");
+ tracing::info!(api_base = "http://localhost:50060", languages = ?languages, "listen_client");
```

If you keep a variable for the base, prefer logging a redacted flag (e.g., api_key_present: api_key.is_empty() == false) instead of the token itself.
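The redacted-flag idea amounts to logging presence, never the value. A TypeScript sketch of the pattern (the actual fix above is a Rust tracing call; this helper name is hypothetical):

```typescript
// Log presence of a secret, never its value.
function redactedLogFields(apiKey: string): { api_key_present: boolean } {
  return { api_key_present: apiKey.length > 0 };
}
```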
571-576: Avoid hardcoding the STT server URL; prefer discovery (with a safe fallback)

Hardcoding "http://localhost:50060" couples the listener to one deployment shape and ignores the app's connection data. With the new local-stt get_servers command available, you can discover the running endpoint and fall back to localhost if missing.
```diff
- let api_base = {
-     use tauri_plugin_connector::{Connection, ConnectorPluginExt};
-     let conn: Connection = app.get_stt_connection().await?.into();
-     conn.api_base
- };
+ // Prefer discovered internal server; fallback to default local port.
+ #[allow(unused)]
+ let api_base: String = {
+     #[cfg(feature = "local-stt")]
+     {
+         use tauri_plugin_local_stt::{server::ServerType, LocalSttPluginExt};
+         app.get_servers()
+             .await
+             .ok()
+             .and_then(|m| m.get(&ServerType::Internal).cloned().flatten())
+             .unwrap_or_else(|| "http://localhost:50060".to_string())
+     }
+     #[cfg(not(feature = "local-stt"))]
+     {
+         "http://localhost:50060".to_string()
+     }
+ };
@@
- Ok(owhisper_client::ListenClient::builder()
-     .api_base("http://localhost:50060")
+ Ok(owhisper_client::ListenClient::builder()
+     .api_base(&api_base)
      .api_key(api_key)
      .params(owhisper_interface::ListenParams {
          languages,
          static_prompt,
          redemption_time_ms: if is_onboarding { 70 } else { 500 },
          ..Default::default()
      })
      .build_dual())
```

If wiring the local-stt feature is out of scope here, at least keep the get_stt_connection result and use its api_base instead of a hardcoded URL.

Also applies to: 589-597
apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1)
172-192: Remove in-component error handling (guideline: no error handling in TSX)

Per coding guidelines, drop try/catch here. Let errors propagate to global handlers or rely on channel/error state already present.
Apply:
```diff
- const handleSttCompletion = async () => {
-   if (sttDownload.completed) {
-     try {
-       await localSttCommands.setCurrentModel(selectedSttModel);
-       await localSttCommands.startServer(null);
-     } catch (error) {
-       console.error("Error setting up STT:", error);
-     }
-   }
- };
+ const handleSttCompletion = async () => {
+   if (sttDownload.completed) {
+     await localSttCommands.setCurrentModel(selectedSttModel);
+     await localSttCommands.startServer(null);
+   }
+ };
@@
- const handleLlmCompletion = async () => {
-   if (llmDownload.completed && llmSelection === "hyprllm") {
-     try {
-       await localLlmCommands.setCurrentModel("HyprLLM");
-       await localLlmCommands.startServer();
-     } catch (error) {
-       console.error("Error setting up LLM:", error);
-     }
-   }
- };
+ const handleLlmCompletion = async () => {
+   if (llmDownload.completed && llmSelection === "hyprllm") {
+     await localLlmCommands.setCurrentModel("HyprLLM");
+     await localLlmCommands.startServer();
+   }
+ };
```

apps/desktop/src/components/settings/components/ai/shared.tsx (1)
47-53: Align surrounding props with SupportedModel for stronger typing.

LLMModel.key is now SupportedModel, but related props remain string-typed. Recommend tightening:
```diff
 export interface SharedLLMProps {
   // Core State
   customLLMEnabled: UseQueryResult<boolean>;
-  selectedLLMModel: string;
-  setSelectedLLMModel: (model: string) => void;
+  selectedLLMModel: SupportedModel;
+  setSelectedLLMModel: (model: SupportedModel) => void;

   // Critical Mutations
   setCustomLLMEnabledMutation: UseMutationResult<null, Error, boolean, unknown>;

   // Model State
   downloadingModels: Set<string>;
   llmModelsState: LLMModel[];

   // Functions
-  handleModelDownload: (modelKey: string) => Promise<void>;
+  handleModelDownload: (modelKey: SupportedModel) => Promise<void>;
 }
```

If applicable, also revisit any query types (e.g., getCustomLLMModel) to return SupportedModel | null for consistency.
apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (3)
172-176: Avoid leaking API key fragments in queryKey and tighten enabled condition

Including a slice of the API key in the React Query key exposes sensitive info in cache/devtools. Also, enabled should require a trimmed key for remote endpoints.
Apply this diff:
```diff
- queryKey: ["others-direct-models", debouncedApiBase, debouncedApiKey?.slice(0, 8)],
+ // Don't include secrets in the cache key
+ queryKey: ["others-direct-models", debouncedApiBase, Boolean(debouncedApiKey && debouncedApiKey.trim())],
@@
- enabled: (() => {
-   const isLocal = debouncedApiBase?.includes("localhost") || debouncedApiBase?.includes("127.0.0.1");
-
-   try {
-     return Boolean(debouncedApiBase && new URL(debouncedApiBase) && (isLocal || debouncedApiKey));
-   } catch {
-     return false;
-   }
- })(),
+ enabled: (() => {
+   const isLocal = debouncedApiBase?.includes("localhost") || debouncedApiBase?.includes("127.0.0.1");
+   try {
+     const hasValidKey = Boolean(debouncedApiKey && debouncedApiKey.trim().length > 0);
+     return Boolean(debouncedApiBase && new URL(debouncedApiBase) && (isLocal || hasValidKey));
+   } catch {
+     return false;
+   }
+ })(),
```

Also applies to: 215-224
163-170: Incorrect use of customForm.watch in deps; subscribe instead to avoid re-running and stale reads

Calling watch() in the deps array re-evaluates on every render and can cause inconsistent behavior. Subscribe once and use the callback.
```diff
- useEffect(() => {
-   const apiBase = customForm.watch("api_base");
-   const apiKey = customForm.watch("api_key");
-
-   updateDebouncedValues(apiBase || "", apiKey || "");
- }, [customForm.watch("api_base"), customForm.watch("api_key"), updateDebouncedValues]);
+ useEffect(() => {
+   const subscription = customForm.watch((values) => {
+     updateDebouncedValues(values.api_base || "", values.api_key || "");
+   });
+   return () => subscription.unsubscribe();
+ }, [customForm, updateDebouncedValues]);
```
441-449: JSX/SVG prop: use fillRule instead of fill-rule

React's JSX expects camelCase SVG attributes. Using fill-rule will not apply as intended.
```diff
- fill-rule="evenodd"
+ fillRule="evenodd"
```

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
95-102: Prevent model selection when clicking "Show in Finder"

Clicking the button bubbles and triggers the parent card's onClick, unintentionally selecting the model and restarting the server. Stop propagation.
```diff
- <Button
+ <Button
    size="sm"
    variant="outline"
-   onClick={handleShowFileLocation}
+   onClick={(e) => {
+     e.stopPropagation();
+     handleShowFileLocation();
+   }}
    className="text-xs h-7 px-2 flex items-center gap-1"
  >
```
🧹 Nitpick comments (24)
crates/am/src/model.rs (3)
47-53: MB vs MiB mismatch (keys say MB; code uses 1024-based bytes).

Keys embed "MB", but sizes use 1024-based computation (MiB). Either switch to decimal (1_000_000) or clarify naming to "MiB" to avoid confusion. Functionally fine; this is a clarity nit.
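The distinction in numbers, as a small sketch (function names are illustrative, not the crate's API):

```typescript
// 1024-based (MiB) vs 1000-based (MB): a "500 MB" key computed with the
// 1024 factor is actually ~524 decimal megabytes, hence the naming nit.
const MIB = 1024 * 1024;
const MB = 1_000_000;

function mibToBytes(mib: number): number {
  return mib * MIB;
}

function mbToBytes(mb: number): number {
  return mb * MB;
}
```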
1-6: Derive Clone/Copy/Eq to improve ergonomics; ModelInfo should derive Clone.

Public API will be easier to use and compare with these derives; Model can be Copy; ModelInfo cannot.
```diff
-#[derive(Debug, serde::Serialize, serde::Deserialize, specta::Type)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize, specta::Type)]
 pub enum Model {
     ParakeetV2,
     WhisperLargeV3,
     WhisperSmallEn,
 }

-#[derive(Debug, serde::Serialize, serde::Deserialize, specta::Type)]
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, specta::Type)]
 pub struct ModelInfo {
     pub key: String,
     pub name: String,
     pub size_bytes: u64,
 }
```

Also applies to: 8-13
15-22: Consider From&lt;Model&gt; for ModelInfo for idiomatic conversion.

Minor ergonomics: implement From&lt;Model&gt; (or From&lt;&amp;Model&gt;) for ModelInfo so callers can do model.into().
Example (outside selected lines):
```rust
impl From<Model> for ModelInfo {
    fn from(m: Model) -> Self {
        m.info()
    }
}
// or From<&Model> if you prefer borrowing
```

crates/am/Cargo.toml (1)
14-14: Consider gating specta under a feature to reduce build surface.

If TS bindings aren't always needed, make specta optional and gate derives via cfg_attr. Keeps am lean for non-TS consumers.
Cargo.toml:
```diff
-[dependencies]
+[features]
+ts-bindings = ["dep:specta"]
+
+[dependencies]
 reqwest = { workspace = true, features = ["json"] }
 serde = { workspace = true, features = ["derive"] }
 serde_json = { workspace = true }
-specta = { workspace = true, features = ["derive"] }
+specta = { workspace = true, features = ["derive"], optional = true }
 thiserror = { workspace = true }
```

And in Rust types (example):
```rust
#[cfg_attr(feature = "ts-bindings", derive(specta::Type))]
```

crates/am/src/lib.rs (1)
8-8: Avoid glob re-export; re-export explicitly to keep public API tight.

Prevents accidental export of future items.
```diff
-pub use model::*;
+pub use model::{Model, ModelInfo};
```

apps/desktop/src/components/toast/model-download.tsx (3)
94-101: Avoid premature toast when existence checks haven't loaded

If sttModelExists.data or llmModelExists.data is undefined, the code treats them as "missing," which can show the toast unnecessarily. Early-return until those queries resolve.

```diff
 useEffect(() => {
   if (!checkForModelDownload.data) {
     return;
   }
+  if (sttModelExists.isLoading || llmModelExists.isLoading) {
+    return;
+  }
```
127-131: Make the button label model-agnostic

Multiple LLMs are now supported; avoid hardcoding HyprLLM in the CTA.
```diff
- buttonLabel = "Download HyprLLM v1";
+ buttonLabel = "Download Model";
```
26-29: Key stability: include model in query keys to refresh when selection changes

When current model changes, these queries should refetch based on keys.
```diff
 const checkForModelDownload = useQuery({
   enabled: !!(currentSttModel.data && currentLlmModel.data),
-  queryKey: ["check-model-downloaded"],
+  queryKey: ["check-model-downloaded", currentSttModel.data, currentLlmModel.data],
   ...
 const sttModelDownloading = useQuery({
-  queryKey: ["stt-model-downloading"],
+  queryKey: ["stt-model-downloading", currentSttModel.data],
   ...
 const llmModelDownloading = useQuery({
-  queryKey: ["llm-model-downloading"],
+  queryKey: ["llm-model-downloading", currentLlmModel.data],
```

Also applies to: 44-47, 53-56
apps/desktop/src/components/toast/model-select.tsx (1)
45-46: Align CTA text with the new tab

Update the label to reflect the transcription-specific destination.
```diff
- Open AI Settings
+ Open Transcription Settings
```

plugins/windows/src/ext.rs (1)
314-317: Non-resizable Settings window — verify UX on small/HiDPI displays

Locking resize can hinder readability/accessibility on scaled or low-res screens. Ensure content scrolls responsively and no controls become unreachable.
If constraints are needed, consider keeping a min size and allowing resize for flexibility.
apps/desktop/src/locales/ko/messages.po (1)
742-744: Untranslated visible label in KO locale

"Enable Integration" is still English in msgstr. If this surfaces in UI, consider translating (e.g., "통합 사용").
apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1)
14-17: Tighten ModelInfo typing to avoid casts

ModelInfo.model is typed as string, causing casts to WhisperModel. Prefer WhisperModel throughout.
Apply:
```diff
-interface ModelInfo {
-  model: string;
-  is_downloaded: boolean;
-}
+interface ModelInfo {
+  model: WhisperModel;
+  is_downloaded: boolean;
+}
@@
-  const supportedSTTModels = useQuery<ModelInfo[]>({
+  const supportedSTTModels = useQuery<ModelInfo[]>({
@@
-  return models.map((model, index) => ({
+  return models.map((model, index) => ({
     model,
     is_downloaded: downloadedModels[index],
   }));
@@
-  onClick={() => setSelectedModel(model as WhisperModel)}
+  onClick={() => setSelectedModel(model)}
```

This removes as-casts and strengthens type safety.
Also applies to: 52-64, 83-103
apps/desktop/src/components/welcome-modal/download-progress-view.tsx (2)
207-208: Drop "what" style comment per guideline

The JSX comment narrates "what" rather than "why". Remove or rephrase; simplest is to remove.
```diff
- {/* Replace static text with animated messages */}
+
```
109-145: Add cleanup for Channel.onmessage to prevent stale handlers

Avoid potential leaks or duplicate handlers if the effect re-runs.
```diff
 useEffect(() => {
 @@
-  if (llmSelection === "hyprllm") {
+  if (llmSelection === "hyprllm") {
     llmDownload.channel.onmessage = (progress) => {
 @@
-  }
-}, [selectedSttModel, sttDownload.channel, llmDownload.channel, llmSelection]);
+  }
+  return () => {
+    sttDownload.channel.onmessage = undefined as unknown as (v: number) => void;
+    if (llmSelection === "hyprllm") {
+      llmDownload.channel.onmessage = undefined as unknown as (v: number) => void;
+    }
+  };
+}, [selectedSttModel, sttDownload.channel, llmDownload.channel, llmSelection]);
```

Note: adjust typings if Channel exposes a typed remove/close API; use it instead of undefined if available.
apps/desktop/src/components/settings/components/types.ts (1)
29-31: Use a distinct icon for ai-stt to match STT semantics and tab-icon mapping

To keep iconography consistent with the new split and tab-icon.tsx, use BirdIcon for "ai-stt".
Apply this diff within the TABS array:
```diff
- { name: "ai-llm", icon: Sparkles },
- { name: "ai-stt", icon: Sparkles },
+ { name: "ai-llm", icon: Sparkles },
+ { name: "ai-stt", icon: BirdIcon },
```

And add the missing import (outside the changed range):
```ts
import {
  Bell,
  BlocksIcon,
  Calendar,
  CreditCard,
  LayoutTemplate,
  MessageSquare,
  Settings,
  Sparkles,
  Volume2,
  BirdIcon, // add this
} from "lucide-react";
```

plugins/local-stt/src/commands.rs (2)
37-45: Pro models list OK; consider centralizing to avoid drift.

Hardcoding pro models here risks divergence from the am crate. If feasible, expose a curated list (e.g., hypr_am::Model::pro_models()) and map to .info() to keep a single source of truth.
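The single-source-of-truth idea, sketched in TypeScript (the Rust suggestion above is analogous; which models are "pro" and their sizes are placeholders here, not the crate's actual data):

```typescript
interface ModelInfo {
  key: string;
  name: string;
  size_bytes: number;
}

// One canonical table; the "pro" list is derived from it, never
// hand-maintained in a second place. Flags/sizes are illustrative.
const ALL_MODELS: Array<ModelInfo & { pro: boolean }> = [
  { key: "WhisperSmallEn", name: "Whisper Small (en)", size_bytes: 0, pro: false },
  { key: "WhisperLargeV3", name: "Whisper Large v3", size_bytes: 0, pro: true },
  { key: "ParakeetV2", name: "Parakeet v2", size_bytes: 0, pro: true },
];

function listProModels(): ModelInfo[] {
  return ALL_MODELS.filter((m) => m.pro).map(({ pro: _pro, ...info }) => info);
}
```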
118-124: Inspect TS binding for getServers: Partial<Record<ServerType, string | null>>
Our generated binding in plugins/local-stt/js/bindings.gen.ts is:

```ts
async getServers(): Promise<Partial<{ [key in ServerType]: string | null }>>;
```

That means any ServerType mapped to None in Rust will be omitted (undefined) in JS, and property order follows insertion. If your UI or tests expect:

- All enum cases to be present (even when null), or
- A consistent iteration order over ServerType keys

please review and choose one of the following:

• In Rust (plugins/local-stt/src/commands.rs), switch from HashMap&lt;…&gt; to std::collections::BTreeMap&lt;…&gt; for deterministic key order, and append

```diff
- app.get_servers().await.map_err(|e| e.to_string())
+ app.get_servers().await.map_err(|e| e.to_string())
+     .map(|m| m.into_iter().collect())
```

so Specta still generates a Record but in sorted order.

• In JS/TS (stt-view-local.tsx or wherever you consume servers.data), post-process the partial object to:
  - Inject missing keys with null
  - Build a sorted array via Object.keys(…).sort() before mapping/rendering

Ensure your component code handles servers.data[key] being undefined versus null.

plugins/local-llm/src/model.rs (2)
24-42: Lookup tables updated correctly.

file_name, model_url, model_size entries for Gemma3_4bQ4 are consistent. If you have a checksum/size source of truth, consider adding a unit test that validates these constants.
44-50: Checksum API addition is fine.

Values are present for all variants. Consider doc-commenting how to regenerate checksums to avoid drift.
plugins/local-stt/js/bindings.gen.ts (1)
64-64: Consider ModelInfo type consistency across plugins.

The ModelInfo type here differs from the local-llm version (plugins/local-llm/js/bindings.gen.ts, Line 57): local-llm includes a description field and uses SupportedModel for the key, while local-stt uses a plain string for the key.

Consider standardizing the ModelInfo interface across plugins if they represent similar concepts, or use different type names to avoid confusion.
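One way such a standardization could look: a generic base type, so local-llm keeps its SupportedModel union and extra description field while local-stt stays string-keyed. All type names below are assumptions, not the generated bindings:

```typescript
// Hypothetical shared base for both plugins' ModelInfo shapes.
interface BaseModelInfo<K extends string = string> {
  key: K;
  name: string;
  size_bytes: number;
}

type SupportedModel = "HyprLLM" | "Gemma3_4bQ4"; // illustrative union

type LlmModelInfo = BaseModelInfo<SupportedModel> & { description: string };
type SttModelInfo = BaseModelInfo;

const llm: LlmModelInfo = {
  key: "Gemma3_4bQ4",
  name: "Gemma 3 4B (Q4)",
  size_bytes: 0, // placeholder
  description: "example entry",
};

const stt: SttModelInfo = {
  key: "WhisperSmallEn",
  name: "Whisper Small (en)",
  size_bytes: 0, // placeholder
};
```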
apps/desktop/src/components/settings/views/ai-stt.tsx (2)
16-16: Consider removing unused state.

The isWerModalOpen state and setIsWerModalOpen are defined but may not be used effectively. Verify if this state is needed or if it's leftover from refactoring.

Also applies to: 50-51
80-130: Consider centralizing model definitions.

The initialSttModels array contains hardcoded model data that appears to be duplicated across components. Consider moving this to a shared constants file or configuration module to avoid duplication and ensure consistency.

apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1)
180-182: Remove stray console logs

Leftover debug logs will spam console in production. Remove them or guard behind a dev flag.
```diff
- console.log("onquery");
- console.log(url.toString());
+ // Why: avoid noisy logs in production
```
383-391: Auto-switch to Remote tab when Custom LLM is enabled

Currently, the accordion selection changes but the active tab may remain "Local," hiding the Remote configuration. Switch tabs for clearer UX.
```diff
 useEffect(() => {
   if (providerSourceQuery.data) {
     setOpenAccordion(providerSourceQuery.data as "openai" | "gemini" | "openrouter" | "others");
   } else if (customLLMEnabled.data) {
-    setOpenAccordion("others");
+    setOpenAccordion("others");
+    setActiveTab("remote");
   } else {
     setOpenAccordion(null);
   }
 }, [providerSourceQuery.data, customLLMEnabled.data, setOpenAccordion]);
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (47)
- apps/desktop/src/components/license.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (3 hunks)
- apps/desktop/src/components/settings/components/ai/shared.tsx (2 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view.tsx (0 hunks)
- apps/desktop/src/components/settings/components/index.ts (0 hunks)
- apps/desktop/src/components/settings/components/tab-icon.tsx (2 hunks)
- apps/desktop/src/components/settings/components/types.ts (2 hunks)
- apps/desktop/src/components/settings/components/wer-modal.tsx (0 hunks)
- apps/desktop/src/components/settings/views/ai-llm.tsx (6 hunks)
- apps/desktop/src/components/settings/views/ai-stt.tsx (1 hunks)
- apps/desktop/src/components/settings/views/index.ts (1 hunks)
- apps/desktop/src/components/toast/model-download.tsx (1 hunks)
- apps/desktop/src/components/toast/model-select.tsx (1 hunks)
- apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1 hunks)
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1 hunks)
- apps/desktop/src/hooks/use-license.ts (5 hunks)
- apps/desktop/src/locales/en/messages.po (35 hunks)
- apps/desktop/src/locales/ko/messages.po (35 hunks)
- apps/desktop/src/routes/app.settings.tsx (3 hunks)
- crates/am/Cargo.toml (1 hunks)
- crates/am/src/lib.rs (1 hunks)
- crates/am/src/model.rs (1 hunks)
- plugins/listener/src/fsm.rs (3 hunks)
- plugins/local-llm/build.rs (1 hunks)
- plugins/local-llm/js/bindings.gen.ts (2 hunks)
- plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/reference.md (2 hunks)
- plugins/local-llm/permissions/default.toml (1 hunks)
- plugins/local-llm/permissions/schemas/schema.json (2 hunks)
- plugins/local-llm/src/commands.rs (2 hunks)
- plugins/local-llm/src/lib.rs (1 hunks)
- plugins/local-llm/src/model.rs (1 hunks)
- plugins/local-stt/build.rs (1 hunks)
- plugins/local-stt/js/bindings.gen.ts (2 hunks)
- plugins/local-stt/permissions/autogenerated/commands/get_servers.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/reference.md (3 hunks)
- plugins/local-stt/permissions/default.toml (1 hunks)
- plugins/local-stt/permissions/schemas/schema.json (3 hunks)
- plugins/local-stt/src/commands.rs (3 hunks)
- plugins/local-stt/src/ext.rs (3 hunks)
- plugins/local-stt/src/lib.rs (1 hunks)
- plugins/local-stt/src/server/mod.rs (1 hunks)
- plugins/windows/src/ext.rs (1 hunks)
💤 Files with no reviewable changes (3)
- apps/desktop/src/components/settings/components/index.ts
- apps/desktop/src/components/settings/components/wer-modal.tsx
- apps/desktop/src/components/settings/components/ai/stt-view.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/windows/src/ext.rs
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx
- apps/desktop/src/components/welcome-modal/download-progress-view.tsx
- crates/am/src/model.rs
- apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx
- apps/desktop/src/components/toast/model-select.tsx
- apps/desktop/src/components/settings/components/tab-icon.tsx
- apps/desktop/src/components/settings/views/ai-stt.tsx
- apps/desktop/src/components/settings/views/index.ts
- plugins/local-stt/build.rs
- plugins/local-llm/build.rs
- plugins/local-stt/src/server/mod.rs
- apps/desktop/src/components/license.tsx
- plugins/local-llm/src/lib.rs
- apps/desktop/src/components/toast/model-download.tsx
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
- apps/desktop/src/components/settings/components/types.ts
- plugins/local-stt/js/bindings.gen.ts
- plugins/local-stt/src/ext.rs
- plugins/listener/src/fsm.rs
- apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx
- plugins/local-llm/js/bindings.gen.ts
- crates/am/src/lib.rs
- plugins/local-llm/src/commands.rs
- plugins/local-stt/src/lib.rs
- plugins/local-stt/src/commands.rs
- apps/desktop/src/routes/app.settings.tsx
- apps/desktop/src/components/settings/components/ai/shared.tsx
- plugins/local-llm/src/model.rs
- apps/desktop/src/hooks/use-license.ts
- apps/desktop/src/components/settings/components/ai/llm-local-view.tsx
- apps/desktop/src/components/settings/views/ai-llm.tsx
🧬 Code Graph Analysis (16)
crates/am/src/model.rs (1)
plugins/local-stt/js/bindings.gen.ts (1)
ModelInfo(64-64)
apps/desktop/src/components/settings/views/ai-stt.tsx (5)
apps/desktop/src/components/toast/shared.tsx (1)
showSttModelDownloadToast(55-91)

apps/desktop/src/components/settings/components/ai/shared.tsx (2)

SharedSTTProps(91-98), STTModel(55-61)

apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1)

STTViewLocal(107-208)

apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1)

STTViewRemote(3-41)

apps/desktop/src/components/settings/views/ai.tsx (1)
LocalAI(233-909)
apps/desktop/src/components/license.tsx (1)
apps/desktop/src/hooks/use-license.ts (1)
useLicense(8-102)
plugins/local-llm/src/lib.rs (2)
plugins/local-llm/js/bindings.gen.ts (1)
commands(9-46)

plugins/local-llm/src/commands.rs (1)
list_supported_model(13-34)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (4)
plugins/local-stt/js/bindings.gen.ts (1)
WhisperModel(68-68)

apps/desktop/src/components/settings/components/ai/shared.tsx (2)

SharedSTTProps(91-98), STTModel(55-61)

packages/ui/src/lib/utils.ts (1)

cn(4-6)

packages/ui/src/components/ui/button.tsx (1)
Button(37-89)
plugins/local-stt/js/bindings.gen.ts (1)
plugins/local-llm/js/bindings.gen.ts (1)
ModelInfo(58-58)
plugins/local-stt/src/ext.rs (5)
plugins/local-stt/src/commands.rs (1)
get_servers(120-124)

plugins/local-stt/js/bindings.gen.ts (1)

ServerType(66-66)

apps/desktop/src-tauri/src/ext.rs (1)

state(129-129)

plugins/connector/src/ext.rs (1)

state(96-96)

plugins/local-llm/src/ext.rs (6)

state(47-47), state(54-54), state(83-83), state(118-118), state(170-170), state(189-189)
plugins/listener/src/fsm.rs (4)
crates/whisper-local/src/model.rs (2)
languages(31-34), static_prompt(36-39)

owhisper/owhisper-client/src/lib.rs (1)

api_key(42-45)

crates/whisper-cloud/src/client.rs (1)

api_key(28-31)

crates/pyannote-cloud/src/lib.rs (1)
api_key(22-25)
plugins/local-llm/js/bindings.gen.ts (1)
plugins/local-stt/js/bindings.gen.ts (1)
ModelInfo(64-64)
plugins/local-llm/src/commands.rs (2)
plugins/local-llm/js/bindings.gen.ts (2)
ModelInfo(58-58), SupportedModel(59-59)

plugins/local-llm/src/model.rs (1)
model_size(36-42)
plugins/local-stt/src/lib.rs (3)
plugins/local-stt/js/bindings.gen.ts (1)
commands(9-46)

plugins/local-stt/src/commands.rs (4)

get_servers(120-124), start_server(98-105), stop_server(109-116), list_pro_models(39-45)

plugins/local-stt/src/ext.rs (6)

get_servers(29-31), get_servers(209-225), start_server(21-24), start_server(106-172), stop_server(25-28), stop_server(175-206)
plugins/local-stt/src/commands.rs (3)
plugins/local-stt/js/bindings.gen.ts (2)
ServerType(66-66), ModelInfo(64-64)

crates/am/src/model.rs (1)

info(16-22)

plugins/local-stt/src/ext.rs (2)

get_servers(29-31), get_servers(209-225)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
plugins/local-llm/js/bindings.gen.ts (1)
SupportedModel(59-59)
plugins/local-llm/src/model.rs (2)
plugins/local-stt/js/bindings.gen.ts (1)
ModelInfo(64-64)

plugins/local-llm/js/bindings.gen.ts (2)

ModelInfo(58-58), SupportedModel(59-59)
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (2)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
LLMModel(46-53)

plugins/local-llm/js/bindings.gen.ts (1)
SupportedModel(59-59)
apps/desktop/src/components/settings/views/ai-llm.tsx (4)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
LLMModel(46-53)

plugins/local-llm/js/bindings.gen.ts (1)

SupportedModel(59-59)

packages/ui/src/components/ui/tabs.tsx (3)

TabsList(53-53), TabsTrigger(53-53), Tabs(53-53)

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
LLMLocalView(11-136)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: ci (macos, macos-latest)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (56)
crates/am/src/lib.rs (1)
3-3: Module addition LGTM.

Clear separation of model concerns is good.
apps/desktop/src/hooks/use-license.ts (1)
20-21: Sanity check passed: timing constants and React Query API verified
- No remaining shouldRefresh() calls found.
- All packages use @tanstack/react-query@^5.84.0, which provides the isPending mutation state.
- refetchInterval: 5m + refetchIntervalInBackground: true align with the 7 d TTL, 3 d refresh threshold, 30 m provider check, and 1 h rate limit.

apps/desktop/src/components/toast/model-download.tsx (1)
84-87: Gemma3_4bQ4 included in LLM existence check — good addition

This aligns existence detection with the new supported model. No further changes needed here.
apps/desktop/src/components/toast/model-select.tsx (1)
17-17: Navigation target updated to ai-stt — aligned with tab split

The new target matches the "Transcription" tab. Looks correct.
plugins/local-stt/src/lib.rs (1)
39-43: Specta bindings updated for get_servers and list_pro_models — good

Matches the new commands and enables TS bindings to consume them.
apps/desktop/src/components/settings/components/tab-icon.tsx (1)
4-4: New AI tab icons wired correctly
- ai-llm → SparklesIcon
- ai-stt → BirdIcon
No unused imports; mapping is consistent.

Also applies to: 26-30
plugins/local-stt/src/server/mod.rs (1)
4-6: Derives expanded for ServerType — necessary for map keys and ergonomics

This unblocks get_servers returning HashMap&lt;ServerType, _&gt;.
plugins/listener/src/fsm.rs (1)
568-569: Intentional unused parameter marker is fine

The _jargons naming cleanly silences warnings. Leaving this here to note the intent.

plugins/local-llm/permissions/schemas/schema.json (1)
381-392: Permission schema extended for list_supported_model — consistent with API changes

New allow/deny kinds and default set update are coherent with the ModelInfo-based flow.
Also applies to: 454-458
plugins/local-stt/permissions/schemas/schema.json (3)
478-482: Default permission set documentation updated correctly. The default set now enumerates allow-get-servers and allow-list-pro-models. Consistent with added kinds.
321-332: Permission Kinds Verified for get_servers
- plugins/local-stt/permissions/autogenerated/commands/get_servers.toml exists
- plugins/local-stt/permissions/default.toml includes `"allow-get-servers"`
- plugins/local-stt/permissions/autogenerated/reference.md includes `local-stt:deny-get-servers`
- Code references to `get_servers` in src/ align with the command name
No issues found. LGTM.
393-404: Verified list_pro_models permissions integration. All relevant files and references are present and correctly named:
- plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml exists
- Default permissions include `"allow-list-pro-models"` in plugins/local-stt/permissions/default.toml
- Code references `commands::list_pro_models` in plugins/local-stt/src/lib.rs and implements `list_pro_models` in plugins/local-stt/src/commands.rs
No further changes required.
apps/desktop/src/locales/ko/messages.po (2)
267-275: Placeholder change acknowledged. Template placeholder msgid "{0}" remains intact; no i18n interpolation risks introduced. LGTM.
990-994: Please confirm intent for Korean translations in apps/desktop/src/locales/ko/messages.po. Our check shows 1,679 entries with empty `msgstr` in the Korean locale, including the newly added keys:
- msgid "Local"
- msgid "Remote"
- msgid "Summarization"
- msgid "Transcription"
If you intend to ship with English fallbacks, please annotate or mark these untranslated entries accordingly. Otherwise, add the appropriate Korean translations (or flag them for later) before merging.
apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1)
12-12: Import source switch to stt-view-local is correct. sttModelMetadata is now centralized under the local STT view; usage remains unchanged. Good move.
plugins/local-stt/build.rs (1)
9-9: Registering get_servers and list_pro_models. Additions are consistent with the rest of the stack (commands.rs, bindings, schema). No further changes needed here.
Also applies to: 13-13
apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1)
11-11: Import source switch to stt-view-local is correct. Centralizes STT metadata under the Local view; usage unchanged. OK.
plugins/local-stt/permissions/default.toml (1)
10-10: Add default permissions for new commands — LGTM. allow-get-servers and allow-list-pro-models align with the new command surfaces and schema updates.
Also applies to: 14-14
plugins/local-llm/build.rs (1)
13-13: Command registry updated — LGTM. list_supported_model correctly reflects the renamed API and enables Specta/bindings generation.
plugins/local-llm/src/lib.rs (1)
40-41: Specta registry uses list_supported_model — LGTM. Matches the commands.rs rename and updated return type. No further changes needed here.
plugins/local-llm/permissions/default.toml (1)
15-16: Summary mismatch: both downloaded and supported list permissions are present. The AI summary states “replaced” allow-list-downloaded-model, but the file keeps it and adds allow-list-supported-model. This looks correct given both commands exist.
apps/desktop/src/components/settings/views/index.ts (1)
1-2: All LocalAI references have been removed
A repository-wide search for “LocalAI” returned no matches, confirming that all imports and usages have been updated to AILLM/AISTT.
plugins/local-stt/permissions/autogenerated/commands/get_servers.toml (1)
1-13: Autogenerated permissions manifest for get_servers — LGTM. The allow/deny entries and schema reference are consistent with the existing pattern.
apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1)
3-41: Remote STT placeholder component — LGTM. Clean, minimal, and consistent with the new split views.
plugins/local-stt/src/commands.rs (2)
1-2: Imports are correct and used. HashMap is used by get_servers, Channel by download_model. No unused imports.
4-4: Correct dependency usage. ServerType and LocalSttPluginExt are required for new commands; alignment matches plugins/local-stt/src/ext.rs.
apps/desktop/src/routes/app.settings.tsx (3)
11-13: Split views imported correctly. AILLM/AISTT imports align with the tab split. No unused imports.
134-135: Conditional rendering aligns with new tabs. Placement and order look fine and match titles.
53-56: Fix inconsistent icon for “ai-stt” in TABS mapping. The tab names are correctly added, but the icon for “ai-stt” in the TABS array is still `Sparkles`. In `TabIcon`, “ai-stt” renders a `BirdIcon`, so these should be aligned to avoid mismatches at runtime.
• apps/desktop/src/components/settings/components/types.ts

```diff
@@ export const TABS: { name: Tab; icon: LucideIcon }[] = [
   { name: "ai-llm", icon: Sparkles },
-  { name: "ai-stt", icon: Sparkles },
+  { name: "ai-stt", icon: BirdIcon },
 ]
```

Don’t forget to import `BirdIcon` from `lucide-react` alongside the other icons.
Likely an incorrect or invalid review comment.
plugins/local-llm/src/model.rs (1)
4-10: ModelInfo addition is consistent with bindings. Fields and derives align with TS type and specta. Looks good.
plugins/local-llm/js/bindings.gen.ts (2)
58-59: Types align with Rust. ModelInfo and the extended SupportedModel (Gemma3_4bQ4) match the Rust-side changes.
13-15: No outdated LLM command usages; renaming is safe. All instances of `listSupportedModels()` belong to the local-stt plugin and remain unchanged. The local-llm code only calls the new `listSupportedModel()`, so there are no stale call sites to update.
apps/desktop/src/components/settings/components/ai/shared.tsx (2)
2-2: Type import is correct and tree-shakable. Using a type-only import avoids runtime cost. Good.
6-7: cn import used. No unused imports; consistent with usage in RatingDisplay.
plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml (1)
1-14: LGTM! Autogenerated permissions file follows expected pattern. The permission file structure is correct with proper schema reference and standard allow/deny permission pairs for the `list_pro_models` command.
plugins/local-stt/js/bindings.gen.ts (2)
34-36: LGTM! New getServers command properly typed. The return type correctly maps ServerType to optional API base URLs.
43-45: LGTM! New listProModels command implementation. Command properly invokes the backend and returns ModelInfo array.
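As a sketch of how the frontend might consume the `getServers` result (the `pickSttEndpoint` helper and the literal endpoint values are illustrative, not part of the bindings; only the `ServerType`-to-URL map shape comes from the review):

```typescript
// Mirrors the shape described for getServers(): a map of ServerType -> api_base (or null).
type ServerType = "Internal" | "External";
type ServerEndpoints = Partial<Record<ServerType, string | null>>;

// Hypothetical helper: prefer the external server when it is up, else fall back to internal.
function pickSttEndpoint(servers: ServerEndpoints): string | null {
  return servers.External ?? servers.Internal ?? null;
}

// Example states a caller might see:
const internalOnly: ServerEndpoints = { Internal: "http://127.0.0.1:8080", External: null };
const bothDown: ServerEndpoints = { Internal: null, External: null };
```

Because `??` skips both `null` and `undefined`, the helper works whether a server entry is absent or explicitly null.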
plugins/local-llm/permissions/autogenerated/reference.md (2)
18-18: LGTM! Default permission correctly updated. The new `allow-list-supported-model` permission is properly added to the default permission set.
214-235: LGTM! Permission table entries properly added. The allow and deny permission entries for the `list_supported_model` command are correctly documented in the table format.
plugins/local-stt/src/ext.rs (3)
1-1: LGTM! HashMap import added for new functionality. The import is necessary for the new get_servers method return type.
29-31: LGTM! Trait method signature is well-defined. The get_servers method properly returns a HashMap mapping ServerType to optional API base URLs, which aligns with the expected use case.
208-225: LGTM! Implementation follows established patterns. The implementation properly:
- Uses tracing instrumentation consistently with other methods
- Locks shared state safely
- Extracts API base URLs from both server types
- Uses a clean array-to-HashMap conversion pattern
- Handles the Option wrapping correctly
plugins/local-stt/permissions/autogenerated/reference.md (2)
13-13: LGTM! Default permissions properly updated. The new `allow-get-servers` and `allow-list-pro-models` permissions are correctly added to the default permission set.
Also applies to: 17-17
82-104: LGTM! Permission table entries correctly documented. Both get_servers and list_pro_models commands have their allow/deny permission entries properly documented in the table format.
Also applies to: 239-260
plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml (1)
1-14: LGTM! Autogenerated permissions file properly structured. The permission file correctly defines allow/deny permissions for the `list_supported_model` command with proper schema reference.
apps/desktop/src/components/settings/views/ai-stt.tsx (1)
12-78: LGTM! Component structure and state management are well implemented. The tabbed interface, state management, and component organization follow React best practices. The translation support and query client usage are properly implemented.
plugins/local-llm/src/commands.rs (2)
1-1: LGTM! Import statement properly updated. The imports now include the necessary ModelInfo and SupportedModel types for the refactored function.
13-34: LGTM! Function refactoring improves API design. The changes provide several improvements:
- Function name is more consistent (singular form)
- Returns rich metadata via ModelInfo instead of just enum values
- Provides helpful descriptions for each model
- Uses proper size calculation via the `model_size()` method
The hardcoded approach is appropriate for a controlled set of supported models.
apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1)
154-161: Ignore incorrect suggestion on useDebouncedCallback signature. The `useDebouncedCallback` hook from beautiful-react-hooks expects arguments as `(fn, dependencies?, wait?, options?)`. Your current usage—passing `[]` as the dependencies array and `2000` as the debounce delay—is correct. No changes are required.
Likely an incorrect or invalid review comment.
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
35-43: Selection flow looks sound. Selecting a downloaded model updates current model, disables custom LLM, and restarts the server. This aligns with the desired UX.
apps/desktop/src/components/settings/views/ai-llm.tsx (2)
122-135: LGTM: Dynamic LLM model list initialization. Listing supported models and normalizing to LLMModel[] with size formatting is clean and straightforward.
186-199: LGTM: Download status typing with satisfies. Using `satisfies Record<SupportedModel, boolean>` keeps keys aligned with the union and prevents drift.
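A minimal sketch of the `satisfies` pattern praised here (the key names follow the SupportedModel union from the bindings; the boolean values are illustrative):

```typescript
type SupportedModel = "Llama3p2_3bQ4" | "HyprLLM" | "Gemma3_4bQ4";

// `satisfies` validates the object against Record<SupportedModel, boolean> without
// widening its type: adding, removing, or misspelling a model key becomes a
// compile-time error, while each property keeps its literal type.
const downloadStatus = {
  Llama3p2_3bQ4: false,
  HyprLLM: true,
  Gemma3_4bQ4: false,
} satisfies Record<SupportedModel, boolean>;
```

Unlike a type annotation (`const x: Record<SupportedModel, boolean>`), `satisfies` preserves the narrow inferred type, so `downloadStatus.HyprLLM` stays usable as a literal.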
apps/desktop/src/locales/en/messages.po (1)
990-997: Translations align with new UI. New strings for Local/Remote, Loading available models…, Model Name, and Autonomy Selector match the updated components.
Also applies to: 1248-1252, 970-973, 619-620
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (3)
158-167: LGTM: Download status propagation. Polling and mapping download status into sttModels via setSttModels is correct and efficient.
304-323: LGTM: Safe selection and server restart flow. Guarding click by model.downloaded and restarting the server ensures consistent state after selection.
225-227: Show in Finder handling is solid. Opening the models directory via modelsDir + openPath and stopping propagation (in download button) prevents unintended selection. Mirrors the recommended LLM fix.
Also applies to: 337-349
force-pushed from cbf764d to d8b126e
Actionable comments posted: 12
🔭 Outside diff range comments (3)
crates/am/src/lib.rs (1)
15-21: Avoid live-network unit tests; mark ignored and make the assertion meaningful. This test calls a real server on localhost:50060 and always passes with assert!(true). Make it explicit that it requires a running server and avoid noise.

```diff
-    #[tokio::test]
-    async fn test_client_creation() {
-        let client = Client::new("http://localhost:50060");
-        let status = client.status().await;
-        println!("{:?}", status);
-        assert!(true);
-    }
+    #[tokio::test]
+    #[ignore = "requires local AM server at http://localhost:50060"]
+    async fn test_client_creation() {
+        let client = Client::new("http://localhost:50060");
+        let status = client.status().await;
+        // Verify the status call succeeds when the server is running.
+        assert!(status.is_ok(), "status endpoint errored: {:?}", status.err());
+    }
```

apps/desktop/src/hooks/use-license.ts (1)
41-44: Remove local onError handlers; violates “No error handling” guideline and risks logging sensitive data. Per repo guideline, avoid local error handling; also don’t log raw errors (may contain PII). Rely on react-query statuses and central UI/telemetry instead.
Apply these diffs to drop local handlers:

```diff
@@
     return license;
   },
-  onError: (e) => {
-    console.error(e);
-    queryClient.setQueryData(LICENSE_QUERY_KEY, null);
-  },
   onSuccess: (license) => {
     queryClient.setQueryData(LICENSE_QUERY_KEY, license);
   },
 });
@@
   });
-  onError: console.error,
   onSuccess: () => {
     queryClient.invalidateQueries({ queryKey: LICENSE_QUERY_KEY });
   },
 });
@@
   });
-  onError: console.error,
   onSuccess: () => {
     queryClient.setQueryData(LICENSE_QUERY_KEY, null);
   },
 });
```

Also applies to: 77-77, 91-91
apps/desktop/src/locales/ko/messages.po (1)
16-20: Fill missing Korean translations for new AI settings strings. The KO locale file (apps/desktop/src/locales/ko/messages.po) contains several new or relocated `msgid` entries with empty `msgstr` values, which will fall back to English if not translated:
- "Type terms separated by commas (e.g., Blitz Meeting, PaC Squad)" (src/components/settings/views/general.tsx:362)
- "{days} day{plural} later" (packages/utils/src/datetime.ts:22)
- "{hours} hour{plural} later" (packages/utils/src/datetime.ts:27)
- "{minutes} minutes later" (packages/utils/src/datetime.ts:32)
- Additional keys (e.g., Local, Remote, Autonomy Selector, provider labels)
Please fill in the Korean translations or mark these entries as fuzzy before release to avoid mixed-language UI.
♻️ Duplicate comments (2)
apps/desktop/src/hooks/use-license.ts (1)
2-2: Solid fix: memoized getLicenseStatus and ms-based expiry window. Addressed prior feedback: useCallback stabilizes identity; msUntilExpiry removes the off-by-one near expiry. Looks correct.
Also applies to: 50-65
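The status-driven, throttled refresh discussed in these comments can be captured as pure helpers. The constants below follow the figures quoted in the review (3-day refresh threshold, 1-hour rate limit); the function names are illustrative sketches, not the app's actual hook API:

```typescript
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const REFRESH_THRESHOLD_DAYS = 3;
const RATE_LIMIT_MS = 60 * 60 * 1000;

// Ms-based expiry window: needsRefresh opens only while the license is still
// valid but within the threshold of expiring.
function getLicenseStatus(expiresAtMs: number, nowMs: number) {
  const msUntilExpiry = expiresAtMs - nowMs;
  return {
    isValid: msUntilExpiry > 0,
    needsRefresh:
      msUntilExpiry > 0 && msUntilExpiry <= REFRESH_THRESHOLD_DAYS * MS_PER_DAY,
  };
}

// A refresh is attempted only when the window is open and the last attempt
// is at least RATE_LIMIT_MS in the past.
function shouldAttemptRefresh(
  expiresAtMs: number,
  lastAttemptMs: number,
  nowMs: number,
): boolean {
  const { needsRefresh } = getLicenseStatus(expiresAtMs, nowMs);
  return needsRefresh && nowMs - lastAttemptMs >= RATE_LIMIT_MS;
}
```

Keeping this logic pure makes the timing behavior unit-testable independently of react-query.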
plugins/local-llm/src/model.rs (1)
1-3: Gemma3_4bQ4 correctly added to SUPPORTED_MODELS. This resolves the prior omission so it appears in discovery flows.
🧹 Nitpick comments (4)
apps/desktop/src/hooks/use-license.ts (1)
60-64: Nit: extract MS_PER_DAY to improve readability and reuse.

```diff
-const REFRESH_THRESHOLD_DAYS = 3;
+const REFRESH_THRESHOLD_DAYS = 3;
+const MS_PER_DAY = 24 * 60 * 60 * 1000;
@@
-  return {
-    needsRefresh: msUntilExpiry > 0
-      && msUntilExpiry <= REFRESH_THRESHOLD_DAYS * 24 * 60 * 60 * 1000,
-    isValid: msUntilExpiry > 0,
-  };
+  return {
+    needsRefresh:
+      msUntilExpiry > 0 &&
+      msUntilExpiry <= REFRESH_THRESHOLD_DAYS * MS_PER_DAY,
+    isValid: msUntilExpiry > 0,
+  };
```

plugins/local-stt/src/server/internal.rs (1)
54-58: Avoid silently ignoring shutdown send result. If there are no receivers, send() returns Err and is silently dropped. Consider low-level logging for visibility.

```diff
-        let _ = self.shutdown.send(());
+        if self.shutdown.send(()).is_err() {
+            tracing::debug!("local_stt_internal_terminate_no_receivers");
+        }
```

plugins/local-stt/src/server/external.rs (1)
21-21: Unused shutdown receiver; either use it or remove it. There’s no consumer of _shutdown_rx, and terminate() kills the child directly. Either wire the receiver to a graceful shutdown path or drop the channel entirely.

```diff
-    let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(());
+    let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(()); // Keep only if you plan to use it for graceful shutdown
```

If not planning to use it, remove the channel and the shutdown field.
plugins/local-stt/src/ext.rs (1)
170-172: Enrich missing-binary error with path context. AmBinaryNotFound carries no context. Consider including which path(s) were missing to ease debugging.
For example, make AmBinaryNotFound(PathBuf) and return the missing path.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (51)
- apps/desktop/src/components/license.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (3 hunks)
- apps/desktop/src/components/settings/components/ai/shared.tsx (2 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view.tsx (0 hunks)
- apps/desktop/src/components/settings/components/index.ts (0 hunks)
- apps/desktop/src/components/settings/components/tab-icon.tsx (2 hunks)
- apps/desktop/src/components/settings/components/types.ts (2 hunks)
- apps/desktop/src/components/settings/components/wer-modal.tsx (0 hunks)
- apps/desktop/src/components/settings/views/ai-llm.tsx (6 hunks)
- apps/desktop/src/components/settings/views/ai-stt.tsx (1 hunks)
- apps/desktop/src/components/settings/views/index.ts (1 hunks)
- apps/desktop/src/components/toast/model-download.tsx (1 hunks)
- apps/desktop/src/components/toast/model-select.tsx (1 hunks)
- apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1 hunks)
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1 hunks)
- apps/desktop/src/hooks/use-license.ts (5 hunks)
- apps/desktop/src/locales/en/messages.po (35 hunks)
- apps/desktop/src/locales/ko/messages.po (35 hunks)
- apps/desktop/src/routes/app.settings.tsx (3 hunks)
- crates/am/Cargo.toml (1 hunks)
- crates/am/src/client.rs (2 hunks)
- crates/am/src/lib.rs (2 hunks)
- crates/am/src/model.rs (1 hunks)
- plugins/listener/src/fsm.rs (3 hunks)
- plugins/local-llm/build.rs (1 hunks)
- plugins/local-llm/js/bindings.gen.ts (2 hunks)
- plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/reference.md (2 hunks)
- plugins/local-llm/permissions/default.toml (1 hunks)
- plugins/local-llm/permissions/schemas/schema.json (2 hunks)
- plugins/local-llm/src/commands.rs (2 hunks)
- plugins/local-llm/src/lib.rs (1 hunks)
- plugins/local-llm/src/model.rs (1 hunks)
- plugins/local-stt/build.rs (1 hunks)
- plugins/local-stt/js/bindings.gen.ts (2 hunks)
- plugins/local-stt/permissions/autogenerated/commands/get_servers.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/reference.md (3 hunks)
- plugins/local-stt/permissions/default.toml (1 hunks)
- plugins/local-stt/permissions/schemas/schema.json (3 hunks)
- plugins/local-stt/src/commands.rs (3 hunks)
- plugins/local-stt/src/error.rs (1 hunks)
- plugins/local-stt/src/ext.rs (8 hunks)
- plugins/local-stt/src/lib.rs (1 hunks)
- plugins/local-stt/src/server/external.rs (1 hunks)
- plugins/local-stt/src/server/internal.rs (1 hunks)
- plugins/local-stt/src/server/mod.rs (1 hunks)
- plugins/windows/src/ext.rs (1 hunks)
💤 Files with no reviewable changes (3)
- apps/desktop/src/components/settings/components/index.ts
- apps/desktop/src/components/settings/components/wer-modal.tsx
- apps/desktop/src/components/settings/components/ai/stt-view.tsx
🚧 Files skipped from review as they are similar to previous changes (33)
- plugins/local-stt/permissions/autogenerated/commands/get_servers.toml
- apps/desktop/src/components/license.tsx
- crates/am/Cargo.toml
- plugins/local-stt/src/lib.rs
- crates/am/src/model.rs
- plugins/local-llm/src/lib.rs
- apps/desktop/src/components/toast/model-select.tsx
- plugins/local-stt/permissions/default.toml
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx
- apps/desktop/src/components/welcome-modal/download-progress-view.tsx
- apps/desktop/src/components/settings/views/ai-stt.tsx
- plugins/local-stt/build.rs
- plugins/windows/src/ext.rs
- apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx
- plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml
- plugins/listener/src/fsm.rs
- apps/desktop/src/components/toast/model-download.tsx
- apps/desktop/src/components/settings/components/ai/shared.tsx
- plugins/local-stt/src/commands.rs
- plugins/local-stt/js/bindings.gen.ts
- plugins/local-llm/build.rs
- plugins/local-stt/permissions/schemas/schema.json
- plugins/local-llm/permissions/autogenerated/reference.md
- plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml
- apps/desktop/src/components/settings/components/types.ts
- plugins/local-llm/permissions/default.toml
- apps/desktop/src/routes/app.settings.tsx
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
- apps/desktop/src/components/settings/components/tab-icon.tsx
- plugins/local-stt/permissions/autogenerated/reference.md
- plugins/local-llm/js/bindings.gen.ts
- apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx
- apps/desktop/src/components/settings/components/ai/llm-local-view.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- crates/am/src/client.rs
- plugins/local-stt/src/error.rs
- apps/desktop/src/components/settings/views/index.ts
- plugins/local-stt/src/server/internal.rs
- crates/am/src/lib.rs
- plugins/local-stt/src/server/mod.rs
- plugins/local-stt/src/server/external.rs
- plugins/local-stt/src/ext.rs
- apps/desktop/src/hooks/use-license.ts
- plugins/local-llm/src/model.rs
- apps/desktop/src/components/settings/views/ai-llm.tsx
- plugins/local-llm/src/commands.rs
🧬 Code Graph Analysis (7)
plugins/local-stt/src/server/internal.rs (1)
plugins/local-stt/src/server/external.rs (1)
terminate (8-12)
crates/am/src/lib.rs (1)
crates/am/src/client.rs (2)
new (13-18), new (128-143)
plugins/local-stt/src/server/external.rs (1)
plugins/local-stt/src/server/internal.rs (2)
terminate (55-58), run_server (61-88)
plugins/local-stt/src/ext.rs (2)
plugins/local-stt/src/commands.rs (1)
get_servers (120-124)
crates/am/src/client.rs (2)
new (13-18), new (128-143)
plugins/local-llm/src/model.rs (3)
plugins/local-llm/js/bindings.gen.ts (2)
SupportedModel (59-59), ModelInfo (58-58)
plugins/local-stt/js/bindings.gen.ts (1)
ModelInfo (64-64)
crates/am/src/model.rs (1)
model_size (47-53)
apps/desktop/src/components/settings/views/ai-llm.tsx (4)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
LLMModel (46-53)
plugins/local-llm/js/bindings.gen.ts (1)
SupportedModel (59-59)
packages/ui/src/components/ui/tabs.tsx (2)
TabsList (53-53), Tabs (53-53)
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
LLMLocalView (11-136)
plugins/local-llm/src/commands.rs (2)
plugins/local-llm/js/bindings.gen.ts (2)
ModelInfo (58-58), SupportedModel (59-59)
plugins/local-llm/src/model.rs (1)
model_size (36-42)
🪛 GitHub Actions: .github/workflows/fmt.yaml
plugins/local-llm/src/model.rs
[error] 1-8: dprint check failed. Found 1 not formatted file. Command: ~/.dprint/bin/dprint check --config 'dprint.json'.
🔇 Additional comments (28)
crates/am/src/lib.rs (1)
3-9: Re-exporting model module looks consistent. Adding mod model; and pub use model::*; aligns with existing export style (client, error, types). No unused items introduced.
crates/am/src/client.rs (4)
7-10: Client rename: struct definition LGTM. Struct visibility, fields, and Clone derive remain correct. No unused imports/fields detected.
12-26: Client impl: no functional changes detected. Constructor signatures and behavior unchanged aside from the type name. Consistent with prior API.
201-205: Default impl: LGTM. Keeps the same default base URL; consistent and clear.
7-205: No remaining `AmClient` references detected. I searched the entire repository (including all `lib.rs` files) for `AmClient` and found no occurrences. All references have been updated or aliased appropriately.
apps/desktop/src/hooks/use-license.ts (3)
6-7: Good centralization of TTL across activate/refresh. Consistent LICENSE_TTL_SECONDS in both validate paths reduces drift.
Also applies to: 35-36, 72-73
101-101: All `shouldRefresh` references have been removed. Ran `rg -n --hidden --no-ignore -S "shouldRefresh\(|shouldRefresh\b"`, and no matches were found. The API rename to `getLicenseStatus` is fully applied across the codebase.
21-23: No redundant polling vs. manual refresh. The 5-minute `getLicense` query only reads the local license state via `keygen.getLicense`, while the `LicenseRefreshProvider` kicks off a network-backed `refreshLicense` call on a 30-minute interval (with a 1-hour rate limit) only when the license is invalid or nearing expiry. These are distinct, non-overlapping flows and will not duplicate work.
plugins/local-stt/src/server/mod.rs (1)
4-6: Deriving Copy/Hash is appropriate for map keys and lightweight enums. Good call deriving Clone, Copy, PartialEq, Eq, and Hash. This makes ServerType usable as a HashMap key and cheap to pass by value. No concerns.
plugins/local-stt/src/error.rs (1)
21-26: No stale `BinaryNotFound` references remain; all call sites have been updated to use `AmBinaryNotFound`, `AmApiKeyNotSet`, and `ServerAlreadyRunning`.
- Verified that `BinaryNotFound` no longer appears anywhere in the codebase.
- Error variants in plugins/local-stt/src/error.rs and all `crate::Error::…` usages in plugins/local-stt/src/ext.rs have been updated.
plugins/local-stt/src/server/internal.rs (1)
51-52: Make shutdown private: good encapsulation. Hiding the shutdown sender prevents misuse and pushes callers toward terminate(). Looks good.
plugins/local-stt/src/server/external.rs (2)
18-18: Spawn signature change LGTM. Using the returned child (instead of discarding) aligns with the new lifecycle. Good.
25-27: Handle construction looks consistent. api_base + shutdown + child compose a minimal, self-sufficient handle. No issues.
plugins/local-stt/src/ext.rs (4)
1-1: HashMap import is correct for the new API. Matches the new get_servers surface.
29-31: New get_servers API is a good addition. Public extension returning endpoints per server type is useful for discovery and aligns with ServerType’s new Hash/Eq traits.
156-161: AM key retrieval and error surfacing look good. ok_or(AmApiKeyNotSet) keeps the error path explicit. No issues.
247-264: get_servers implementation is clean and efficient. Leverages new ServerType traits and returns a compact 2-entry map. LGTM.
plugins/local-llm/permissions/schemas/schema.json (2)
381-392: Confirmed: `allow-list-supported-model` permission is fully integrated. Everything is in place for the new `list_supported_model` command:
- plugins/local-llm/permissions/default.toml includes `"allow-list-supported-model"` in the default permission set.
- JSON schema declares both `allow-list-supported-model` and `deny-list-supported-model` with matching descriptions.
- Autogenerated reference.md lists the new permissions under `local-llm:allow-list-supported-model` and `local-llm:deny-list-supported-model`.
- Autogenerated commands TOML defines the allow/deny rules for `list_supported_model`.
All default configurations and docs correctly reflect the change.
454-458: Default permission documentation verified. The `allow-list-supported-model` entry is present in schema.json and in plugins/local-llm/permissions/default.toml. No further changes needed.
plugins/local-llm/src/commands.rs (2)
1-1: Import update looks correct.Bringing
ModelInfoandSupportedModelinto scope matches the new return type and payload.
13-33: No action required:ModelInfo.keyis already the correctSupportedModelunion typeThe generated TypeScript binding in
plugins/local-llm/js/bindings.gen.tsshows:
export type ModelInfo = { key: SupportedModel; name: string; description: string; size_bytes: number }export type SupportedModel = "Llama3p2_3bQ4" | "HyprLLM" | "Gemma3_4bQ4"Since
ModelInfo.keyis already typed asSupportedModel, no changes are needed.plugins/local-llm/src/model.rs (2)
4-10: ModelInfo struct is appropriate for frontend consumption.Fields and derives look right for Specta/Serde. No issues.
20-33: New variant mappings look consistent (filename/URL/size/checksum).URLs and sizes are defined; checksum method addition is fine for integrity checks if used later.
Please ensure any downloader/validator (if present) uses
model_checksum()for integrity to avoid silent corruption.Also applies to: 36-50
apps/desktop/src/components/settings/views/ai-llm.tsx (5)
29-29: Import toast helper (OK). Used by the download flow; no unused imports.
114-121: Component/state scaffolding for tab split (OK). `activeTab` and `llmModelsState` are introduced cleanly with minimal changes to downstream props.
159-161: Thin wrapper for model download (OK). Keeps intent clear and reuses shared flow.
613-621: Tabs Local/Remote wiring is consistent with the new split. Clean 2-tab UI and translation hooks; no issues.
626-716: Remote view: autonomy control placement makes sense behind custom endpoint. The gating on `customLLMEnabled.data` is clear. Minimal comments; aligns with guideline.
```tsx
useEffect(() => {
  localLlmCommands.listSupportedModel().then((ms) => {
    const models: LLMModel[] = ms.map((model) => ({
      key: model.key as SupportedModel,
      name: model.name,
      description: model.description,
      available: true,
      downloaded: false,
      size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
    }));

    setLlmModels(models);
  });
}, []);
```
💡 Verification agent
🧩 Analysis chain
Dynamic model list on mount is good; align TS type to avoid casts.
Mapping the backend ModelInfo to UI state is correct. You currently cast model.key as SupportedModel; prefer having bindings emit key: SupportedModel to drop the cast.
🏁 Script executed:
#!/bin/bash
rg -n $'export\\s+type\\s+ModelInfo\\s*=\\s*\\{\\s*key:\\s*(SupportedModel|string);' plugins/local-llm/js/bindings.gen.ts -n -A 0
Length of output: 229
🏁 Script executed:
#!/bin/bash
# Locate the definition of LLMModel to confirm its key type
rg -n "export type LLMModel" -A 5
Length of output: 35
🏁 Script executed:
#!/bin/bash
# Search for LLMModel type alias or interface in the repository
rg -n "export (type|interface) LLMModel" -n
# Find where LLMModel is imported in the ai-llm.tsx file to locate its definition
rg -n "import .*LLMModel" -n apps/desktop/src/components/settings/views/ai-llm.tsx
Length of output: 223
🏁 Script executed:
#!/bin/bash
rg -n "export interface LLMModel" -A 10 apps/desktop/src/components/settings/components/ai/shared.tsx
Length of output: 333
Drop redundant as SupportedModel cast
Since ModelInfo.key is already typed as SupportedModel, you can remove the unnecessary cast in your mapping:
• File: apps/desktop/src/components/settings/views/ai-llm.tsx
const models: LLMModel[] = ms.map((model) => ({
- key: model.key as SupportedModel,
+ key: model.key,
name: model.name,
description: model.description,
available: true,
downloaded: false,
size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
}));
No changes are needed in the generated bindings—this simply cleans up the UI code.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
useEffect(() => {
  localLlmCommands.listSupportedModel().then((ms) => {
    const models: LLMModel[] = ms.map((model) => ({
      key: model.key,
      name: model.name,
      description: model.description,
      available: true,
      downloaded: false,
      size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
    }));
    setLlmModels(models);
  });
}, []);
```
🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/views/ai-llm.tsx around lines 122 to
135, the mapping over ModelInfo objects includes a redundant "as SupportedModel"
cast on model.key; remove the unnecessary cast so you assign key: model.key
directly (since ModelInfo.key is already typed as SupportedModel) and leave the
rest of the mapping unchanged.
```rust
    .state::<crate::SharedState>()
    .lock()
    .await
    .external_server
    .is_some()
{
    return Err(crate::Error::ServerAlreadyRunning);
}
```
🛠️ Refactor suggestion
Same race for external start
The external branch has the same check-then-set race. Apply the same serialization approach as above.
🤖 Prompt for AI Agents
In plugins/local-stt/src/ext.rs around lines 146 to 154, the code does a
separate check for whether external_server is Some(), creating the same
check-then-set race as the other branch; fix it by performing the check and the
setting of external_server while holding the same SharedState lock in one atomic
section (i.e., acquire the lock, if external_server.is_some() return
ServerAlreadyRunning, otherwise set external_server = Some(...) before releasing
the lock) so that the check-and-set are serialized and no race can occur; if
creating the server requires async work, create the server value first but only
insert it into shared state inside the lock (or use a placeholder/handle) so the
mutation remains atomic.
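The serialization pattern the prompt describes is language-agnostic: the existence check and the assignment must happen under one lock acquisition. A minimal Python sketch of the idea (class and method names are hypothetical stand-ins for the plugin's Rust `SharedState`, not its actual API):

```python
import threading

class SharedState:
    """Illustrative stand-in for the plugin's shared state; names are hypothetical."""

    def __init__(self):
        self._lock = threading.Lock()
        self._external_server = None

    def start_external(self, make_server):
        # Check and set happen under one lock acquisition, so two concurrent
        # callers cannot both observe "no server" and both start one.
        with self._lock:
            if self._external_server is not None:
                raise RuntimeError("ServerAlreadyRunning")
            self._external_server = make_server()
            return self._external_server
```

If constructing the server requires work that should not run under the lock, build it first and only publish it into the state inside the locked section, as the prompt suggests.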
Actionable comments posted: 2
🔭 Outside diff range comments (4)
apps/desktop/src/locales/en/messages.po (1)
1-15: Set explicit plural rules (required for correct pluralization). Plural-Forms is empty. Define explicit rules for English to avoid pluralization mishandling at runtime.
Apply this diff in the header:
```diff
 "Language: en\n"
-"Plural-Forms: \n"
+"Plural-Forms: nplurals=2; plural=(n != 1);\n"
```

apps/desktop/src/locales/ko/messages.po (3)
1-15: Set explicit plural rules for Korean (prevents pluralization errors). Plural-Forms is empty. Korean requires a single plural form.
Apply this diff:
```diff
 "Language: ko\n"
-"Plural-Forms: \n"
+"Plural-Forms: nplurals=1; plural=0;\n"
```
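As a sanity check on the two suggested headers, the gettext plural expressions can be evaluated directly; a small sketch (the expressions are copied from the diffs above, translated into Python):

```python
def plural_index_en(n: int) -> int:
    # Plural-Forms: nplurals=2; plural=(n != 1);
    return int(n != 1)

def plural_index_ko(n: int) -> int:
    # Plural-Forms: nplurals=1; plural=0;
    # Korean uses a single form regardless of count.
    return 0
```

English selects form 1 for every count except exactly one; Korean always selects form 0, which is why a single msgstr suffices there.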
741-743: Fix untranslated string (“Enable Integration”). This remains in English. Provide a Korean translation.

```diff
-msgstr "Enable Integration"
+msgstr "연동 활성화"
```
1-1691: Review translation placeholders and coverage in ko/messages.po
- Untranslated entries: 260
`msgstr ""` entries remain, causing heavy fallback to English.
- Placeholder mismatches: 24 instances where the source `msgid` contains tokens (e.g. `{days}`, `{hours}`, `{0}`, `{description}`) but the corresponding `msgstr` is empty, dropping those placeholders entirely.

Action items:

- Provide Korean translations for all `msgid` entries.
- Ensure each translation preserves all placeholder tokens exactly as in the source.
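The placeholder audit above can be automated. A rough sketch of the core comparison, with `{...}` tokens extracted by a regex (a simplification of the real ICU/Lingui placeholder syntax, and independent of any .po parsing library):

```python
import re

PLACEHOLDER = re.compile(r"\{[^{}]*\}")

def placeholder_mismatch(msgid: str, msgstr: str) -> set:
    """Return placeholders present in msgid but missing from msgstr.

    An empty msgstr is treated as untranslated rather than mismatched,
    since the whole string falls back to the source language.
    """
    if not msgstr:
        return set()
    return set(PLACEHOLDER.findall(msgid)) - set(PLACEHOLDER.findall(msgstr))
```

Running this over each entry of the catalog would surface the 24 mismatches without manual diffing.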
♻️ Duplicate comments (3)
apps/desktop/src/locales/en/messages.po (1)
966-973: Prune obsolete LLM/STT keys (#~ entries) from the catalog. These legacy entries (“LLM - Local/Custom”, “Local Models”, “Speech to Text Model”, “Whisper Model Language Performance (WER)”, etc.) are still present as obsolete (#~). They add noise and bloat to the catalogs.
Run to verify remaining obsolete entries:
```bash
#!/bin/bash
# Count obsolete strings in en catalog
rg -n '^#~ msgid ' apps/desktop/src/locales/en/messages.po | wc -l
# List a few representative ones
rg -n '^#~ msgid ' apps/desktop/src/locales/en/messages.po | head -n 20
```

Also applies to: 1003-1006, 1417-1420, 1651-1657
apps/desktop/src/locales/ko/messages.po (1)
966-973: Prune obsolete LLM/STT keys (#~ entries) from the ko catalog as well. Mirror the cleanup in the Korean catalog to remove deprecated strings (LLM - Local/Custom, Local Models, Speech to Text Model, WER modal, etc.) to keep catalogs lean.
Verify remaining obsolete entries:
```bash
#!/bin/bash
# Count obsolete strings in ko catalog
rg -n '^#~ msgid ' apps/desktop/src/locales/ko/messages.po | wc -l
rg -n '^#~ msgid ' apps/desktop/src/locales/ko/messages.po | head -n 20
```

Also applies to: 1003-1006, 1417-1420, 1651-1657
plugins/local-llm/src/model.rs (1)
1-5: Resolved: Gemma3_4bQ4 added to SUPPORTED_MODELS. This addresses the earlier omission; enumeration flows will now surface Gemma.
🧹 Nitpick comments (5)
apps/desktop/src/locales/en/messages.po (2)
269-275: Avoid generic “{0}” message; prefer named placeholders or explicit IDs. A bare "{0}" reused across contexts is brittle for translators and QA. Prefer an explicit ID with a meaningful message and named variables (e.g., {label}), or pass the already-localized string directly.
If keeping this pattern, ensure all usages always supply a fully localized value to {0}, not raw English fragments.
994-998: Disambiguate “Local” and “Remote” or add translator comments. These labels appear in multiple AI settings tabs. To reduce ambiguity in non-English locales, consider:
- Adding explicit IDs (js-lingui-explicit-id) tied to “processing path” context, or
- Adding translator notes in code (/* translators: Local processing path */).
Also applies to: 1252-1256
plugins/local-llm/src/model.rs (3)
7-13: Derive Clone and Debug for ModelInfo for ergonomics and logging. Commonly passed across layers and useful in logs; no runtime cost.

```diff
-#[derive(serde::Serialize, serde::Deserialize, specta::Type)]
+#[derive(serde::Serialize, serde::Deserialize, specta::Type, Clone, Debug)]
 pub struct ModelInfo {
```
23-28: Normalize the file_name convention across models (nit). Llama/Hypr use canonical short names while Gemma uses the remote basename. Consider a consistent convention to avoid user confusion and duplicate downloads if names change upstream.
39-45: Readability: add numeric separators to byte constants (nit). Improves maintainability when scanning large literals.

```diff
-            SupportedModel::Llama3p2_3bQ4 => 2019377440,
-            SupportedModel::HyprLLM => 1107409056,
-            SupportedModel::Gemma3_4bQ4 => 2489894016,
+            SupportedModel::Llama3p2_3bQ4 => 2_019_377_440,
+            SupportedModel::HyprLLM => 1_107_409_056,
+            SupportedModel::Gemma3_4bQ4 => 2_489_894_016,
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1 hunks)
- apps/desktop/src/locales/en/messages.po (35 hunks)
- apps/desktop/src/locales/ko/messages.po (35 hunks)
- plugins/local-llm/src/model.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
plugins/local-llm/src/model.rs
🧬 Code Graph Analysis (1)
plugins/local-llm/src/model.rs (2)
plugins/local-llm/js/bindings.gen.ts (2)
- SupportedModel (59-59)
- ModelInfo (58-58)

crates/am/src/model.rs (1)
- model_size (47-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (2)
plugins/local-llm/src/model.rs (2)
15-20: Enum extension looks consistent and exhaustively matched. Variant added and all match arms updated accordingly.
31-36: Verify remote URLs and model sizes. The automated check only confirmed the HyprLLM URL. Please manually verify that the other two endpoints return HTTP 200 and their Content-Length headers match the expected model sizes to avoid integrity mismatches:
• File: plugins/local-llm/src/model.rs (lines 31–36)
– Llama3p2_3bQ4: expect HTTP 200, Content-Length = 2019377440
– Gemma3_4bQ4: expect HTTP 200, Content-Length = 2489894016

If any endpoint is unreachable or the size differs, update the URL or the `model_size` accordingly.
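The manual verification above can be scripted. A sketch with urllib: the real model URLs must be filled in from `plugins/local-llm/src/model.rs` (they are not reproduced in this thread), and only the pure size comparison is factored out as a helper:

```python
import urllib.request

EXPECTED_SIZES = {
    # Expected Content-Length per model, values quoted from the review above.
    "Llama3p2_3bQ4": 2019377440,
    "Gemma3_4bQ4": 2489894016,
}

def size_matches(expected: int, content_length) -> bool:
    # A missing Content-Length header counts as a verification failure.
    return content_length is not None and int(content_length) == expected

def head_ok(url: str, expected: int) -> bool:
    # Issue a HEAD request so the model binary is not actually downloaded.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200 and size_matches(
            expected, resp.headers.get("Content-Length")
        )
```

Note that some CDNs omit Content-Length on redirected or chunked responses, in which case a ranged GET would be needed instead.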
```po
#: src/routes/app.settings.tsx:58
msgid "Calendar"
msgstr ""
```
🛠️ Refactor suggestion
Provide minimal high-impact translations to avoid English fallback in the new AI settings UI.
These are core navigation/AI settings strings now exposed prominently. Suggested translations:
-#: src/routes/app.settings.tsx:58
-msgid "Calendar"
-msgstr ""
+msgid "Calendar"
+msgstr "캘린더"
-#: src/routes/app.settings.tsx:52
-msgid "General"
-msgstr ""
+msgid "General"
+msgstr "일반"
-#: src/components/settings/views/integrations.tsx:118
-msgid "Integrations"
-msgstr ""
+msgid "Integrations"
+msgstr "연동"
-#: src/routes/app.settings.tsx:60
-msgid "Notifications"
-msgstr ""
+msgid "Notifications"
+msgstr "알림"
-#: src/routes/app.settings.tsx:64
-msgid "Sound"
-msgstr ""
+msgid "Sound"
+msgstr "소리"
-#: src/routes/app.settings.tsx:56
-msgid "Transcription"
-msgstr ""
+msgid "Transcription"
+msgstr "음성 인식"
-#: src/routes/app.settings.tsx:62
-msgid "Templates"
-msgstr ""
+msgid "Templates"
+msgstr "템플릿"
-#: src/routes/app.settings.tsx:70
-msgid "License"
-msgstr ""
+msgid "License"
+msgstr "라이선스"
-#: src/components/left-sidebar/top-area/settings-button.tsx:101
-msgid "Settings"
-msgstr ""
+msgid "Settings"
+msgstr "설정"
-#: src/components/settings/views/ai-stt.tsx:63
-#: src/components/settings/views/ai-llm.tsx:617
-msgid "Local"
-msgstr ""
+msgid "Local"
+msgstr "로컬"
-#: src/components/settings/views/ai-stt.tsx:66
-#: src/components/settings/views/ai-llm.tsx:620
-msgid "Remote"
-msgstr ""
+msgid "Remote"
+msgstr "원격"
-#: src/components/welcome-modal/index.tsx:350
-#: src/components/welcome-modal/index.tsx:361
-msgid "Back"
-msgstr ""
+msgid "Back"
+msgstr "뒤로"
-#: src/components/welcome-modal/download-progress-view.tsx:258
-msgid "Continue"
-msgstr ""
+msgid "Continue"
+msgstr "계속"
-#: src/components/welcome-modal/welcome-view.tsx:36
-msgid "Get Started"
-msgstr ""
+msgid "Get Started"
+msgstr "시작하기"
-#: src/components/settings/components/ai/llm-custom-view.tsx:569
-msgid "API Base URL"
-msgstr ""
+msgid "API Base URL"
+msgstr "API 기본 URL"
-#: src/components/settings/components/ai/llm-custom-view.tsx:277
-#: src/components/settings/components/ai/llm-custom-view.tsx:373
-#: src/components/settings/components/ai/llm-custom-view.tsx:479
-#: src/components/settings/components/ai/llm-custom-view.tsx:591
-msgid "API Key"
-msgstr ""
+msgid "API Key"
+msgstr "API 키"
-#: src/components/settings/components/ai/llm-custom-view.tsx:297
-#: src/components/settings/components/ai/llm-custom-view.tsx:393
-#: src/components/settings/components/ai/llm-custom-view.tsx:499
-msgid "Model"
-msgstr ""
+msgid "Model"
+msgstr "모델"
-#: src/components/settings/components/ai/llm-custom-view.tsx:616
-msgid "Model Name"
-msgstr ""
+msgid "Model Name"
+msgstr "모델 이름"
-#: src/components/settings/views/ai-llm.tsx:642
-msgid "Autonomy Selector"
-msgstr ""
+msgid "Autonomy Selector"
+msgstr "자율성 설정"
-#: src/components/settings/views/ai-llm.tsx:661
-msgid "Control how autonomous the AI enhancement should be"
-msgstr ""
+msgid "Control how autonomous the AI enhancement should be"
+msgstr "AI 향상의 자율성 수준을 조절합니다"
-#: src/components/settings/views/ai-llm.tsx:656
-msgid "Learn more about AI autonomy"
-msgstr ""
+msgid "Learn more about AI autonomy"
+msgstr "AI 자율성에 대해 자세히 알아보기"
-#: src/components/settings/components/ai/llm-custom-view.tsx:628
-msgid "Loading available models..."
-msgstr ""
+msgid "Loading available models..."
+msgstr "사용 가능한 모델 불러오는 중..."
-#: src/components/welcome-modal/custom-endpoint-view.tsx:550
-msgid "Loading models..."
-msgstr ""
+msgid "Loading models..."
+msgstr "모델 불러오는 중..."
-#: src/components/settings/views/templates.tsx:240
-msgid "Loading templates..."
-msgstr ""
+msgid "Loading templates..."
+msgstr "템플릿 불러오는 중..."
-#: src/components/settings/components/ai/llm-custom-view.tsx:619
-msgid "Select a model from the dropdown (if available) or manually enter the model name required by your endpoint."
-msgstr ""
+msgid "Select a model from the dropdown (if available) or manually enter the model name required by your endpoint."
+msgstr "드롭다운에서 모델을 선택하거나(가능한 경우) 엔드포인트에서 요구하는 모델 이름을 직접 입력하세요."

Also applies to: 825-828, 893-897, 1136-1139, 1413-1416, 1542-1545, 1486-1489, 954-957, 1385-1388, 994-998, 1252-1256, 446-450, 610-612, 833-836, 400-404, 410-416, 1033-1041, 1042-1046, 442-445, 617-620, 942-945, 974-977, 982-985, 986-989, 1333-1336
🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 472-475, the msgid
"Calendar" has an empty msgstr causing English fallback; add the appropriate
Korean translation (e.g., "캘린더") as msgstr. Repeat this for the other referenced
ranges (825-828, 893-897, 1136-1139, 1413-1416, 1542-1545, 1486-1489, 954-957,
1385-1388, 994-998, 1252-1256, 446-450, 610-612, 833-836, 400-404, 410-416,
1033-1041, 1042-1046, 442-445, 617-620, 942-945, 974-977, 982-985, 986-989,
1333-1336) by filling each empty msgstr with the minimal high-impact Korean
translations for the corresponding msgids used in the AI settings UI to prevent
English fallback.
```rust
pub fn model_checksum(&self) -> u64 {
    match self {
        SupportedModel::Llama3p2_3bQ4 => 2831308098,
        SupportedModel::HyprLLM => 4037351144,
        SupportedModel::Gemma3_4bQ4 => 2760830291,
    }
}
```
💡 Verification agent
🧩 Analysis chain
Checksum: document algorithm and verify it’s used; otherwise remove.
State the algorithm/endian (e.g., xxh3_64, crc32, sha256-truncated) and ensure download verification consumes it; if unused, it violates “no unused functions.”
Doc comment suggestion:
```diff
 impl SupportedModel {
-    pub fn model_checksum(&self) -> u64 {
+    /// Why: stable integrity check for downloaded model binaries; must match the downloader/verifier.
+    pub fn model_checksum(&self) -> u64 {
         match self {
```

Search for usage:
If no call sites are found in downloader/verification paths, either wire it in or drop it.
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
rg -n "model_checksum" -A 3 -B 3
```

Length of output: 478
Action Required: Integrate or remove unused model_checksum
The model_checksum method in plugins/local-llm/src/model.rs (lines 47–53) isn’t referenced anywhere in the codebase and thus is dead code. You should either:
- Wire it into your download/verification logic (e.g., in your downloader module or wherever you validate model binaries) so the checksum actually enforces integrity, or
- Remove the method entirely to eliminate unused code.
Suggested doc-comment if you choose to keep and integrate it:
```diff
 impl SupportedModel {
+    /// Returns the 64-bit xxh3 checksum (little-endian) for verifying downloaded model binaries.
     pub fn model_checksum(&self) -> u64 {
         match self {
             SupportedModel::Llama3p2_3bQ4 => 2831308098,
             SupportedModel::HyprLLM => 4037351144,
             SupportedModel::Gemma3_4bQ4 => 2760830291,
         }
     }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
impl SupportedModel {
    /// Returns the 64-bit xxh3 checksum (little-endian) for verifying downloaded model binaries.
    pub fn model_checksum(&self) -> u64 {
        match self {
            SupportedModel::Llama3p2_3bQ4 => 2831308098,
            SupportedModel::HyprLLM => 4037351144,
            SupportedModel::Gemma3_4bQ4 => 2760830291,
        }
    }
}
```
🤖 Prompt for AI Agents
In plugins/local-llm/src/model.rs around lines 47 to 53, the model_checksum
method is unused dead code; either remove it or wire it into the model
download/verification flow. If removing: delete the method and any references to
it (none currently) and run a build. If keeping: update the downloader/installer
to call SupportedModel::model_checksum() after download, compute the downloaded
file's checksum (same u64 algorithm/endianness as these constants), compare and
fail the install if mismatched, and add a short doc-comment above model_checksum
explaining its purpose.
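If the method is kept, the verification step the prompt describes could look like the following sketch. The actual hash algorithm behind the hard-coded u64 constants is undocumented in this thread, so `compute_checksum` below uses CRC32 purely as a runnable placeholder; the real implementation must use whatever algorithm produced those constants (e.g. xxh3_64):

```python
import zlib
from pathlib import Path

def compute_checksum(path: Path) -> int:
    # Placeholder algorithm: CRC32 streamed over the file in 1 MiB chunks.
    # The plugin must substitute the algorithm that generated its constants.
    crc = 0
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def verify_download(path: Path, expected: int) -> None:
    # Compare after download completes; fail the install on mismatch.
    actual = compute_checksum(path)
    if actual != expected:
        raise ValueError(f"checksum mismatch: expected {expected}, got {actual}")
```

Wiring the equivalent check into the Rust downloader after the model file lands on disk would make the constants load-bearing and satisfy the "no unused functions" rule.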
Summary by cubic
Redesigned the AI settings to split LLM and STT configuration into separate tabs, added new local and remote STT views, and improved model management for both LLM and STT.
New Features
Refactors