
New ai settings #1313

Merged
yujonglee merged 8 commits into main from new-ai-settings
Aug 11, 2025

Conversation


@yujonglee (Contributor) commented Aug 11, 2025

Summary by cubic

Redesigned the AI settings to split LLM and STT configuration into separate tabs, added new local and remote STT views, and improved model management for both LLM and STT.

  • New Features

    • Separate tabs for LLM and STT settings with new icons.
    • Added local and remote STT configuration views.
    • Improved model listing and metadata for LLM and STT, including support for new models.
    • Enhanced license refresh logic and UI feedback.
  • Refactors

    • Removed old combined STT view and WER modal.
    • Updated plugin commands and permissions for better model and server management.
    • Cleaned up settings code and updated translations.


coderabbitai bot commented Aug 11, 2025

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Splits AI settings into separate LLM and STT views; adds/renames local-llm and local-stt plugin commands and model metadata (including Gemma3_4bQ4); refactors desktop license refresh to a status-driven, throttled schedule; updates bindings, permissions, locales, UI, and server shutdown/termination handling.

Changes

Cohort / File(s) / Summary

License refresh throttling
apps/desktop/src/components/license.tsx, apps/desktop/src/hooks/use-license.ts
Replaced shouldRefresh with getLicenseStatus (needsRefresh/isValid); added TTL and scheduling constants, initial delay, interval, rate-limiting refs; effect now delays first check, runs periodic guarded attempts; refetchInterval increased.

LLM settings & custom endpoint discovery
apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx, apps/desktop/src/components/settings/components/ai/llm-local-view.tsx, apps/desktop/src/components/settings/components/ai/shared.tsx
Added debounced api_base/api_key, useQuery + tauriFetch model-discovery for custom endpoints; removed some props (selectedLLMModel/customLLMConnection/getCustomLLMModel/handleShowFileLocation); LLMModel.key typed to SupportedModel; internalized model selection and path-opening.

STT UI split & local management
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx, apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx, apps/desktop/src/components/settings/components/ai/stt-view.tsx (deleted), apps/desktop/src/components/settings/components/wer-modal.tsx (deleted), apps/desktop/src/components/settings/components/index.ts
Added STT local and remote view files and exported sttModelMetadata; removed legacy combined STT view and WER modal; removed the wer-modal re-export and adjusted exports.

Settings tabs, icons, and views
apps/desktop/src/components/settings/components/tab-icon.tsx, apps/desktop/src/components/settings/components/types.ts, apps/desktop/src/components/settings/views/ai-llm.tsx, apps/desktop/src/components/settings/views/ai-stt.tsx, apps/desktop/src/components/settings/views/index.ts, apps/desktop/src/routes/app.settings.tsx
Replaced the single "ai" tab with "ai-llm" and "ai-stt"; updated tab keys/icons/types; split views into LlmAI and SttAI exports; replaced LocalAI with AILLM/AISTT and updated routing/tab titles.

Toasts & welcome modal adjustments
apps/desktop/src/components/toast/model-download.tsx, apps/desktop/src/components/toast/model-select.tsx, apps/desktop/src/components/welcome-modal/download-progress-view.tsx, apps/desktop/src/components/welcome-modal/model-selection-view.tsx
Included Gemma3_4bQ4 in model-exists checks; model-select toast now navigates to "ai-stt"; retargeted sttModelMetadata imports to stt-view-local.

i18n reorganization
apps/desktop/src/locales/en/messages.po, apps/desktop/src/locales/ko/messages.po
Re-anchored and reorganized many translation entries to reflect the Local/Remote split, added keys (Local/Remote, Summarization, Intelligence, Speech to Text Model), and marked many entries obsolete/fuzzy; adjusted a dynamic placeholder.

Local-LLM plugin updates & bindings
plugins/local-llm/build.rs, plugins/local-llm/src/commands.rs, plugins/local-llm/src/lib.rs, plugins/local-llm/src/model.rs, plugins/local-llm/js/bindings.gen.ts, plugins/local-llm/permissions/*
Renamed list_supported_models → list_supported_model; added ModelInfo return type; expanded SupportedModel with Gemma3_4bQ4; added model metadata and checksums; updated Specta/TS bindings and autogenerated permission manifests/schema.

Local-STT plugin updates & bindings
plugins/local-stt/build.rs, plugins/local-stt/src/commands.rs, plugins/local-stt/src/ext.rs, plugins/local-stt/src/lib.rs, plugins/local-stt/src/server/mod.rs, plugins/local-stt/js/bindings.gen.ts, plugins/local-stt/permissions/*
Added commands list_pro_models and get_servers; exposed get_servers in the ext API returning an endpoint map; derived extra traits for ServerType; added TS bindings for getServers/listProModels; updated permission manifests, defaults, and schema.

AM crate model metadata & client rename
crates/am/Cargo.toml, crates/am/src/lib.rs, crates/am/src/model.rs, crates/am/src/client.rs
Enabled serde/specta derive features; added a new model module exporting the Model enum and ModelInfo struct with helpers, re-exported from the crate root; renamed AmClient → Client (tests updated).

Listener FSM adjustments
plugins/listener/src/fsm.rs
Removed redemption_time_ms parameter usage and passing; setup_listen_client now uses a fixed redemption_time_ms (70 or 500) based on onboarding and no longer threads an external redemption_time_ms.

Server handle & termination changes (local-stt)
plugins/local-stt/src/server/external.rs, plugins/local-stt/src/server/internal.rs, plugins/local-stt/src/error.rs
ServerHandle now contains child instead of a hypr_am client; shutdown sender made private; added terminate(self) to consume the handle and signal shutdown; added error variants (ServerAlreadyRunning, AmBinaryNotFound, AmApiKeyNotSet) and removed BinaryNotFound.

Windows settings window sizing
plugins/windows/src/ext.rs
Settings window set resizable(false) and removed min_inner_size while keeping inner_size 800x600.
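The license-refresh cohort describes a status-driven, rate-limited schedule. Below is a minimal, framework-free sketch of that guard logic; the names and constants (TTL_MS, RATE_LIMIT_MS, getLicenseStatus, shouldAttemptRefresh) are assumptions for illustration, not the actual use-license.ts implementation:

```typescript
// Assumed constants; the real hook may use different values.
const TTL_MS = 24 * 60 * 60 * 1000; // license status considered stale after 24h
const RATE_LIMIT_MS = 5 * 60 * 1000; // at most one refresh attempt per 5 minutes

interface LicenseState {
  expiresAt: number; // epoch ms
  lastCheckedAt: number; // epoch ms
}

function getLicenseStatus(state: LicenseState, now: number) {
  return {
    isValid: now < state.expiresAt,
    needsRefresh: now - state.lastCheckedAt > TTL_MS,
  };
}

// Guarded attempt: mirrors the alt-branch in the first sequence diagram —
// refresh only when needed, not already pending, and past the rate limit.
function shouldAttemptRefresh(
  state: LicenseState,
  pending: boolean,
  lastAttemptAt: number,
  now: number,
): boolean {
  const { needsRefresh } = getLicenseStatus(state, now);
  return needsRefresh && !pending && now - lastAttemptAt >= RATE_LIMIT_MS;
}
```

In the actual component this guard would sit behind an initial setTimeout delay and a periodic setInterval, with refs tracking the pending flag and last attempt time.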

Sequence Diagram(s)

sequenceDiagram
  participant LicenseComp as LicenseComponent
  participant Hook as useLicense
  participant Backend as keygen.validateCheckoutKey

  LicenseComp->>Hook: getLicenseStatus()
  Hook-->>LicenseComp: {needsRefresh,isValid}
  LicenseComp->>LicenseComp: setTimeout(INITIAL_DELAY)
  LicenseComp->>LicenseComp: setInterval(REFRESH_INTERVAL)
  LicenseComp->>Hook: attemptRefresh()
  alt needsRefresh && !pending && past RATE_LIMIT
    Hook->>Backend: refreshLicense.mutate()
    Backend-->>Hook: response
  else
    Hook-->>LicenseComp: skip
  end
sequenceDiagram
  participant Desktop as LLM UI
  participant Bindings as local-llm TS bindings
  participant Plugin as local-llm plugin

  Desktop->>Bindings: listSupportedModel()
  Bindings->>Plugin: invoke list_supported_model
  Plugin-->>Bindings: Vec<ModelInfo>
  Bindings-->>Desktop: ModelInfo[]
  Desktop->>Desktop: render models / selection
sequenceDiagram
  participant STTView as STT Local UI
  participant Bindings as local-stt TS bindings
  participant Plugin as local-stt plugin

  STTView->>Bindings: getServers()
  Bindings->>Plugin: invoke get_servers
  Plugin-->>Bindings: {Internal: url?, External: url?}
  Bindings-->>STTView: endpoints map
  STTView->>Bindings: listProModels()
  Bindings->>Plugin: invoke list_pro_models
  Plugin-->>Bindings: ModelInfo[]
  Bindings-->>STTView: pro models

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 77e20b9 and 7a91306.

📒 Files selected for processing (7)
  • apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1 hunks)
  • apps/desktop/src/components/settings/views/ai-llm.tsx (6 hunks)
  • apps/desktop/src/components/settings/views/ai-stt.tsx (1 hunks)
  • apps/desktop/src/locales/en/messages.po (35 hunks)
  • apps/desktop/src/locales/ko/messages.po (35 hunks)
  • owhisper/owhisper-server/src/commands/run/realtime.rs (1 hunks)
  • plugins/listener/src/fsm.rs (3 hunks)



@cubic-dev-ai bot left a comment

cubic analysis

No issues found across 10 files.

@coderabbitai bot left a comment (Contributor)

Actionable comments posted: 8

🔭 Outside diff range comments (9)
apps/desktop/src/components/toast/model-download.tsx (1)

26-42: Harden query enablement to avoid undefined access due to non-null assertions

checkForModelDownload only enables on STT model, but uses both STT and LLM; the two download-state queries can also run before their current model values exist. This can invoke commands with undefined. Enable queries only when their required inputs are present.

   const checkForModelDownload = useQuery({
-    enabled: !!currentSttModel.data,
+    enabled: !!(currentSttModel.data && currentLlmModel.data),
     queryKey: ["check-model-downloaded"],
     queryFn: async () => {
       const [stt, llm] = await Promise.all([
         localSttCommands.isModelDownloaded(currentSttModel.data!),
         localLlmCommands.isModelDownloaded(currentLlmModel.data!),
       ]);
       return {
         currentSttModel,
         sttModelDownloaded: stt,
         llmModelDownloaded: llm,
       };
     },
     refetchInterval: 5000,
   });

   const sttModelDownloading = useQuery({
-    enabled: !checkForModelDownload.data?.sttModelDownloaded,
+    enabled: !!currentSttModel.data && !checkForModelDownload.data?.sttModelDownloaded,
     queryKey: ["stt-model-downloading"],
     queryFn: async () => {
       return localSttCommands.isModelDownloading(currentSttModel.data!);
     },
     refetchInterval: 3000,
   });

   const llmModelDownloading = useQuery({
-    enabled: !checkForModelDownload.data?.llmModelDownloaded,
+    enabled: !!currentLlmModel.data && !checkForModelDownload.data?.llmModelDownloaded,
     queryKey: ["llm-model-downloading"],
     queryFn: async () => {
       return localLlmCommands.isModelDownloading(currentLlmModel.data!);
     },
     refetchInterval: 3000,
   });

Also applies to: 44-51, 53-60

plugins/listener/src/fsm.rs (2)

584-585: Remove sensitive api_key from logs and log the actual API base

Logging tokens is a leak. Also, the logged api_base isn’t the one used after hardcoding.

-    tracing::info!(api_base = ?api_base, api_key = ?api_key, languages = ?languages, "listen_client");
+    tracing::info!(api_base = "http://localhost:50060", languages = ?languages, "listen_client");

If you keep a variable for the base, prefer logging a redacted flag (e.g., api_key_present = !api_key.is_empty()) instead of the token itself.


571-576: Avoid hardcoding the STT server URL; prefer discovery (with a safe fallback)

Hardcoding “http://localhost:50060” couples the listener to one deployment shape and ignores the app’s connection data. With the new local-stt get_servers command available, you can discover the running endpoint and fall back to localhost if missing.

-    let api_base = {
-        use tauri_plugin_connector::{Connection, ConnectorPluginExt};
-        let conn: Connection = app.get_stt_connection().await?.into();
-        conn.api_base
-    };
+    // Prefer discovered internal server; fallback to default local port.
+    #[allow(unused)]
+    let api_base: String = {
+        #[cfg(feature = "local-stt")]
+        {
+            use tauri_plugin_local_stt::{server::ServerType, LocalSttPluginExt};
+            app.get_servers()
+                .await
+                .ok()
+                .and_then(|m| m.get(&ServerType::Internal).cloned().flatten())
+                .unwrap_or_else(|| "http://localhost:50060".to_string())
+        }
+        #[cfg(not(feature = "local-stt"))]
+        {
+            "http://localhost:50060".to_string()
+        }
+    };
@@
-    Ok(owhisper_client::ListenClient::builder()
-        .api_base("http://localhost:50060")
+    Ok(owhisper_client::ListenClient::builder()
+        .api_base(&api_base)
         .api_key(api_key)
         .params(owhisper_interface::ListenParams {
             languages,
             static_prompt,
             redemption_time_ms: if is_onboarding { 70 } else { 500 },
             ..Default::default()
         })
         .build_dual())

If wiring the local-stt feature is out of scope here, at least keep the get_stt_connection result and use its api_base instead of a hardcoded URL.

Also applies to: 589-597

apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1)

172-192: Remove in-component error handling (guideline: no error handling in TSX)

Per coding guidelines, drop try/catch here. Let errors propagate to global handlers or rely on channel/error state already present.

Apply:

-    const handleSttCompletion = async () => {
-      if (sttDownload.completed) {
-        try {
-          await localSttCommands.setCurrentModel(selectedSttModel);
-          await localSttCommands.startServer(null);
-        } catch (error) {
-          console.error("Error setting up STT:", error);
-        }
-      }
-    };
+    const handleSttCompletion = async () => {
+      if (sttDownload.completed) {
+        await localSttCommands.setCurrentModel(selectedSttModel);
+        await localSttCommands.startServer(null);
+      }
+    };
@@
-    const handleLlmCompletion = async () => {
-      if (llmDownload.completed && llmSelection === "hyprllm") {
-        try {
-          await localLlmCommands.setCurrentModel("HyprLLM");
-          await localLlmCommands.startServer();
-        } catch (error) {
-          console.error("Error setting up LLM:", error);
-        }
-      }
-    };
+    const handleLlmCompletion = async () => {
+      if (llmDownload.completed && llmSelection === "hyprllm") {
+        await localLlmCommands.setCurrentModel("HyprLLM");
+        await localLlmCommands.startServer();
+      }
+    };
apps/desktop/src/components/settings/components/ai/shared.tsx (1)

47-53: Align surrounding props with SupportedModel for stronger typing.

LLMModel.key is now SupportedModel, but related props remain string-typed. Recommend tightening:

 export interface SharedLLMProps {
   // Core State
   customLLMEnabled: UseQueryResult<boolean>;
-  selectedLLMModel: string;
-  setSelectedLLMModel: (model: string) => void;
+  selectedLLMModel: SupportedModel;
+  setSelectedLLMModel: (model: SupportedModel) => void;

   // Critical Mutations
   setCustomLLMEnabledMutation: UseMutationResult<null, Error, boolean, unknown>;

   // Model State
   downloadingModels: Set<string>;
   llmModelsState: LLMModel[];

   // Functions
-  handleModelDownload: (modelKey: string) => Promise<void>;
+  handleModelDownload: (modelKey: SupportedModel) => Promise<void>;
 }

If applicable, also revisit any query types (e.g., getCustomLLMModel) to return SupportedModel | null for consistency.

apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (3)

172-176: Avoid leaking API key fragments in queryKey and tighten enabled condition

Including a slice of the API key in the React Query key exposes sensitive info in cache/devtools. Also, enabled should require a trimmed key for remote endpoints.

Apply this diff:

-    queryKey: ["others-direct-models", debouncedApiBase, debouncedApiKey?.slice(0, 8)],
+    // Don't include secrets in the cache key
+    queryKey: ["others-direct-models", debouncedApiBase, Boolean(debouncedApiKey && debouncedApiKey.trim())],
@@
-    enabled: (() => {
-      const isLocal = debouncedApiBase?.includes("localhost") || debouncedApiBase?.includes("127.0.0.1");
-
-      try {
-        return Boolean(debouncedApiBase && new URL(debouncedApiBase) && (isLocal || debouncedApiKey));
-      } catch {
-        return false;
-      }
-    })(),
+    enabled: (() => {
+      const isLocal = debouncedApiBase?.includes("localhost") || debouncedApiBase?.includes("127.0.0.1");
+      try {
+        const hasValidKey = Boolean(debouncedApiKey && debouncedApiKey.trim().length > 0);
+        return Boolean(debouncedApiBase && new URL(debouncedApiBase) && (isLocal || hasValidKey));
+      } catch {
+        return false;
+      }
+    })(),

Also applies to: 215-224


163-170: Incorrect use of customForm.watch in deps; subscribe instead to avoid re-running and stale reads

Calling watch() in the deps array re-evaluates on every render and can cause inconsistent behavior. Subscribe once and use the callback.

-  useEffect(() => {
-    const apiBase = customForm.watch("api_base");
-    const apiKey = customForm.watch("api_key");
-
-    updateDebouncedValues(apiBase || "", apiKey || "");
-  }, [customForm.watch("api_base"), customForm.watch("api_key"), updateDebouncedValues]);
+  useEffect(() => {
+    const subscription = customForm.watch((values) => {
+      updateDebouncedValues(values.api_base || "", values.api_key || "");
+    });
+    return () => subscription.unsubscribe();
+  }, [customForm, updateDebouncedValues]);

441-449: JSX/SVG prop: use fillRule instead of fill-rule

React’s JSX expects camelCase SVG attributes. Using fill-rule will not apply as intended.

-                    fill-rule="evenodd"
+                    fillRule="evenodd"
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)

95-102: Prevent model selection when clicking “Show in Finder”

Clicking the button bubbles and triggers the parent card’s onClick, unintentionally selecting the model and restarting the server. Stop propagation.

-                    <Button
+                    <Button
                       size="sm"
                       variant="outline"
-                      onClick={handleShowFileLocation}
+                      onClick={(e) => {
+                        e.stopPropagation();
+                        handleShowFileLocation();
+                      }}
                       className="text-xs h-7 px-2 flex items-center gap-1"
                     >
🧹 Nitpick comments (24)
crates/am/src/model.rs (3)

47-53: MB vs MiB mismatch (keys say MB; code uses 1024-based bytes).

Keys embed “MB”, but sizes use 1024-based computation (MiB). Either switch to decimal (1_000_000) or clarify naming to “MiB” to avoid confusion. Functionally fine; this is a clarity nit.


1-6: Derive Clone/Copy/Eq to improve ergonomics; ModelInfo should derive Clone.

Public API will be easier to use and compare with these derives; Model can be Copy; ModelInfo cannot.

-#[derive(Debug, serde::Serialize, serde::Deserialize, specta::Type)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize, specta::Type)]
 pub enum Model {
     ParakeetV2,
     WhisperLargeV3,
     WhisperSmallEn,
 }

-#[derive(Debug, serde::Serialize, serde::Deserialize, specta::Type)]
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, specta::Type)]
 pub struct ModelInfo {
     pub key: String,
     pub name: String,
     pub size_bytes: u64,
 }

Also applies to: 8-13


15-22: Consider implementing From<Model> for ModelInfo for idiomatic conversion.

Minor ergonomics: implement From<Model> (or From<&Model>) for ModelInfo so callers can do model.into().

Example (outside selected lines):

impl From<Model> for ModelInfo {
    fn from(m: Model) -> Self {
        m.info()
    }
}
// or From<&Model> if you prefer borrowing
crates/am/Cargo.toml (1)

14-14: Consider gating specta under a feature to reduce build surface.

If TS bindings aren’t always needed, make specta optional and gate derives via cfg_attr. Keeps am lean for non-TS consumers.

Cargo.toml:

-[dependencies]
+[features]
+ts-bindings = ["dep:specta"]
+
+[dependencies]
 reqwest = { workspace = true, features = ["json"] }
 serde = { workspace = true, features = ["derive"] }
 serde_json = { workspace = true }
-specta = { workspace = true, features = ["derive"] }
+specta = { workspace = true, features = ["derive"], optional = true }
 thiserror = { workspace = true }

And in Rust types (example):

#[cfg_attr(feature = "ts-bindings", derive(specta::Type))]
crates/am/src/lib.rs (1)

8-8: Avoid glob re-export; re-export explicitly to keep public API tight.

Prevents accidental export of future items.

-pub use model::*;
+pub use model::{Model, ModelInfo};
apps/desktop/src/components/toast/model-download.tsx (3)

94-101: Avoid premature toast when existence checks haven’t loaded

If sttModelExists.data or llmModelExists.data is undefined, the code treats them as “missing,” which can show the toast unnecessarily. Early-return until those queries resolve.

   useEffect(() => {
     if (!checkForModelDownload.data) {
       return;
     }
+    if (sttModelExists.isLoading || llmModelExists.isLoading) {
+      return;
+    }

127-131: Make the button label model-agnostic

Multiple LLMs are now supported; avoid hardcoding HyprLLM in the CTA.

-      buttonLabel = "Download HyprLLM v1";
+      buttonLabel = "Download Model";

26-29: Key stability: include model in query keys to refresh when selection changes

When current model changes, these queries should refetch based on keys.

   const checkForModelDownload = useQuery({
     enabled: !!(currentSttModel.data && currentLlmModel.data),
-    queryKey: ["check-model-downloaded"],
+    queryKey: ["check-model-downloaded", currentSttModel.data, currentLlmModel.data],
   ...

   const sttModelDownloading = useQuery({
-    queryKey: ["stt-model-downloading"],
+    queryKey: ["stt-model-downloading", currentSttModel.data],
   ...

   const llmModelDownloading = useQuery({
-    queryKey: ["llm-model-downloading"],
+    queryKey: ["llm-model-downloading", currentLlmModel.data],

Also applies to: 44-47, 53-56

apps/desktop/src/components/toast/model-select.tsx (1)

45-46: Align CTA text with the new tab

Update the label to reflect the transcription-specific destination.

-          Open AI Settings
+          Open Transcription Settings
plugins/windows/src/ext.rs (1)

314-317: Non-resizable Settings window — verify UX on small/HiDPI displays

Locking resize can hinder readability/accessibility on scaled or low-res screens. Ensure content scrolls responsively and no controls become unreachable.

If constraints are needed, consider keeping a min size and allowing resize for flexibility.

apps/desktop/src/locales/ko/messages.po (1)

742-744: Untranslated visible label in KO locale

"Enable Integration" is still English in msgstr. If this surfaces in UI, consider translating (e.g., "통합 사용").

apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1)

14-17: Tighten ModelInfo typing to avoid casts

ModelInfo.model is typed as string, causing casts to WhisperModel. Prefer WhisperModel throughout.

Apply:

-interface ModelInfo {
-  model: string;
-  is_downloaded: boolean;
-}
+interface ModelInfo {
+  model: WhisperModel;
+  is_downloaded: boolean;
+}
@@
-  const supportedSTTModels = useQuery<ModelInfo[]>({
+  const supportedSTTModels = useQuery<ModelInfo[]>({
@@
-      return models.map((model, index) => ({
+      return models.map((model, index) => ({
         model,
         is_downloaded: downloadedModels[index],
       }));
@@
-                      onClick={() => setSelectedModel(model as WhisperModel)}
+                      onClick={() => setSelectedModel(model)}

This removes as-casts and strengthens type safety.

Also applies to: 52-64, 83-103

apps/desktop/src/components/welcome-modal/download-progress-view.tsx (2)

207-208: Drop “what” style comment per guideline

The JSX comment narrates “what” rather than “why”. Remove or rephrase; simplest is to remove.

-      {/* Replace static text with animated messages */}
+

109-145: Add cleanup for Channel.onmessage to prevent stale handlers

Avoid potential leaks or duplicate handlers if the effect re-runs.

   useEffect(() => {
@@
-    if (llmSelection === "hyprllm") {
+    if (llmSelection === "hyprllm") {
       llmDownload.channel.onmessage = (progress) => {
@@
-    }
-  }, [selectedSttModel, sttDownload.channel, llmDownload.channel, llmSelection]);
+    }
+    return () => {
+      sttDownload.channel.onmessage = undefined as unknown as (v: number) => void;
+      if (llmSelection === "hyprllm") {
+        llmDownload.channel.onmessage = undefined as unknown as (v: number) => void;
+      }
+    };
+  }, [selectedSttModel, sttDownload.channel, llmDownload.channel, llmSelection]);

Note: adjust typings if Channel exposes a typed remove/close API; use it instead of undefined if available.

apps/desktop/src/components/settings/components/types.ts (1)

29-31: Use a distinct icon for ai-stt to match STT semantics and tab-icon mapping

To keep iconography consistent with the new split and tab-icon.tsx, use BirdIcon for "ai-stt".

Apply this diff within the TABS array:

-  { name: "ai-llm", icon: Sparkles },
-  { name: "ai-stt", icon: Sparkles },
+  { name: "ai-llm", icon: Sparkles },
+  { name: "ai-stt", icon: BirdIcon },

And add the missing import (outside the changed range):

import {
  Bell,
  BlocksIcon,
  Calendar,
  CreditCard,
  LayoutTemplate,
  MessageSquare,
  Settings,
  Sparkles,
  Volume2,
  BirdIcon, // add this
} from "lucide-react";
plugins/local-stt/src/commands.rs (2)

37-45: Pro models list OK; consider centralizing to avoid drift.

Hardcoding pro models here risks divergence from the am crate. If feasible, expose a curated list (e.g., hypr_am::Model::pro_models()) and map to .info() to keep a single source of truth.


118-124: Inspect TS binding for getServers: Partial<Record<ServerType, string | null>>
Our generated binding in plugins/local-stt/js/bindings.gen.ts is:

async getServers(): Promise<Partial<{ [key in ServerType]: string | null }>>;

That means any ServerType mapped to None in Rust will be omitted (undefined) in JS, and property order follows insertion. If your UI or tests expect:

  • All enum cases to be present (even when null), or
  • A consistent iteration order over ServerType keys

please review and choose one of the following:

• In Rust (plugins/local-stt/src/commands.rs), switch from HashMap<…> to std::collections::BTreeMap<…> for deterministic key order, and append

-    app.get_servers().await.map_err(|e| e.to_string())
+    app.get_servers().await.map_err(|e| e.to_string())
+        .map(|m| m.into_iter().collect())

so Specta still generates a Record but in sorted order.

• In JS/TS (stt-view-local.tsx or wherever you consume servers.data), post-process the partial object to:
– Inject missing keys with null
– Build a sorted array via Object.keys(…).sort() before mapping/rendering

Ensure your component code handles servers.data[key] being undefined versus null.
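To make the undefined-versus-null handling concrete, one possible consumer-side normalization looks like this (a sketch only: normalizeServers and SERVER_TYPES are illustrative names, and ServerType is redeclared locally rather than imported from the generated bindings):

```typescript
// Local stand-in for the generated ServerType union.
type ServerType = "Internal" | "External";
const SERVER_TYPES: ServerType[] = ["External", "Internal"];

// Inject missing keys as null and iterate in a fixed, sorted order, so the
// UI never depends on Rust-side HashMap insertion order or on keys that the
// Partial record omitted entirely.
function normalizeServers(
  servers: Partial<{ [key in ServerType]: string | null }>,
): Array<[ServerType, string | null]> {
  return [...SERVER_TYPES]
    .sort()
    .map((key) => [key, servers[key] ?? null] as [ServerType, string | null]);
}
```

With this shape, a view can map over the tuples directly and render "not running" for null endpoints without special-casing undefined.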

plugins/local-llm/src/model.rs (2)

24-42: Lookup tables updated correctly.

file_name, model_url, model_size entries for Gemma3_4bQ4 are consistent. If you have a checksum/size source of truth, consider adding a unit test that validates these constants.


44-50: Checksum API addition is fine.

Values are present for all variants. Consider doc-commenting how to regenerate checksums to avoid drift.

plugins/local-stt/js/bindings.gen.ts (1)

64-64: Consider ModelInfo type consistency across plugins.

The ModelInfo type here differs from the local-llm version (plugins/local-llm/js/bindings.gen.ts, Line 57) - local-llm includes a description field and uses SupportedModel for the key, while local-stt uses a plain string for the key.

Consider standardizing the ModelInfo interface across plugins if they represent similar concepts, or use different type names to avoid confusion.
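One sketch of such standardization is a shared base interface plus a plugin-specific extension; all names here are illustrative, not the generated types:

```typescript
// Hypothetical shared base for both plugins' model metadata.
interface BaseModelInfo {
  key: string;
  name: string;
  size_bytes: number;
}

// local-llm adds a description on top of the base shape.
interface LlmModelInfo extends BaseModelInfo {
  description: string;
}

// local-stt could use the base shape directly.
type SttModelInfo = BaseModelInfo;

// Shared helpers then work for either plugin's models.
function formatSize(info: BaseModelInfo): string {
  return `${info.name}: ${(info.size_bytes / (1024 * 1024)).toFixed(0)} MiB`;
}
```

The alternative (distinct names like LlmModelInfo/SttModelInfo with no shared base) also resolves the confusion, at the cost of duplicating helpers.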

apps/desktop/src/components/settings/views/ai-stt.tsx (2)

16-16: Consider removing unused state.

The isWerModalOpen state and setIsWerModalOpen are defined but may not be used effectively. Verify if this state is needed or if it's leftover from refactoring.

Also applies to: 50-51


80-130: Consider centralizing model definitions.

The initialSttModels array contains hardcoded model data that appears to be duplicated across components. Consider moving this to a shared constants file or configuration module to avoid duplication and ensure consistency.
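As a sketch of that centralization (the module shape, names, and entries below are hypothetical; the real sttModelMetadata structure may differ):

```typescript
// Hypothetical shared constants module, e.g. settings/components/ai/stt-models.ts,
// imported by both the settings view and the welcome modal.
export interface SttModelMeta {
  key: string; // matches the plugin's supported-model string
  displayName: string;
}

export const STT_MODEL_METADATA: SttModelMeta[] = [
  { key: "WhisperSmallEn", displayName: "Whisper Small (English)" },
  { key: "WhisperLargeV3", displayName: "Whisper Large v3" },
];

// Single lookup helper so views never re-declare model literals.
export function sttModelByKey(key: string): SttModelMeta | undefined {
  return STT_MODEL_METADATA.find((m) => m.key === key);
}
```

Each consumer then imports STT_MODEL_METADATA instead of holding its own initialSttModels copy, so adding a model is a one-file change.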

apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1)

180-182: Remove stray console logs

Leftover debug logs will spam console in production. Remove them or guard behind a dev flag.

-      console.log("onquery");
-      console.log(url.toString());
+      // Why: avoid noisy logs in production
apps/desktop/src/components/settings/views/ai-llm.tsx (1)

383-391: Auto-switch to Remote tab when Custom LLM is enabled

Currently, the accordion selection changes but the active tab may remain “Local,” hiding the Remote configuration. Switch tabs for clearer UX.

   useEffect(() => {
     if (providerSourceQuery.data) {
       setOpenAccordion(providerSourceQuery.data as "openai" | "gemini" | "openrouter" | "others");
     } else if (customLLMEnabled.data) {
-      setOpenAccordion("others");
+      setOpenAccordion("others");
+      setActiveTab("remote");
     } else {
       setOpenAccordion(null);
     }
   }, [providerSourceQuery.data, customLLMEnabled.data, setOpenAccordion]);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 090a014 and cbf764d.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (47)
  • apps/desktop/src/components/license.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (3 hunks)
  • apps/desktop/src/components/settings/components/ai/shared.tsx (2 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (0 hunks)
  • apps/desktop/src/components/settings/components/index.ts (0 hunks)
  • apps/desktop/src/components/settings/components/tab-icon.tsx (2 hunks)
  • apps/desktop/src/components/settings/components/types.ts (2 hunks)
  • apps/desktop/src/components/settings/components/wer-modal.tsx (0 hunks)
  • apps/desktop/src/components/settings/views/ai-llm.tsx (6 hunks)
  • apps/desktop/src/components/settings/views/ai-stt.tsx (1 hunks)
  • apps/desktop/src/components/settings/views/index.ts (1 hunks)
  • apps/desktop/src/components/toast/model-download.tsx (1 hunks)
  • apps/desktop/src/components/toast/model-select.tsx (1 hunks)
  • apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1 hunks)
  • apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1 hunks)
  • apps/desktop/src/hooks/use-license.ts (5 hunks)
  • apps/desktop/src/locales/en/messages.po (35 hunks)
  • apps/desktop/src/locales/ko/messages.po (35 hunks)
  • apps/desktop/src/routes/app.settings.tsx (3 hunks)
  • crates/am/Cargo.toml (1 hunks)
  • crates/am/src/lib.rs (1 hunks)
  • crates/am/src/model.rs (1 hunks)
  • plugins/listener/src/fsm.rs (3 hunks)
  • plugins/local-llm/build.rs (1 hunks)
  • plugins/local-llm/js/bindings.gen.ts (2 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/reference.md (2 hunks)
  • plugins/local-llm/permissions/default.toml (1 hunks)
  • plugins/local-llm/permissions/schemas/schema.json (2 hunks)
  • plugins/local-llm/src/commands.rs (2 hunks)
  • plugins/local-llm/src/lib.rs (1 hunks)
  • plugins/local-llm/src/model.rs (1 hunks)
  • plugins/local-stt/build.rs (1 hunks)
  • plugins/local-stt/js/bindings.gen.ts (2 hunks)
  • plugins/local-stt/permissions/autogenerated/commands/get_servers.toml (1 hunks)
  • plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml (1 hunks)
  • plugins/local-stt/permissions/autogenerated/reference.md (3 hunks)
  • plugins/local-stt/permissions/default.toml (1 hunks)
  • plugins/local-stt/permissions/schemas/schema.json (3 hunks)
  • plugins/local-stt/src/commands.rs (3 hunks)
  • plugins/local-stt/src/ext.rs (3 hunks)
  • plugins/local-stt/src/lib.rs (1 hunks)
  • plugins/local-stt/src/server/mod.rs (1 hunks)
  • plugins/windows/src/ext.rs (1 hunks)
💤 Files with no reviewable changes (3)
  • apps/desktop/src/components/settings/components/index.ts
  • apps/desktop/src/components/settings/components/wer-modal.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

⚙️ CodeRabbit Configuration File

**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".

Files:

  • plugins/windows/src/ext.rs
  • apps/desktop/src/components/welcome-modal/model-selection-view.tsx
  • apps/desktop/src/components/welcome-modal/download-progress-view.tsx
  • crates/am/src/model.rs
  • apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx
  • apps/desktop/src/components/toast/model-select.tsx
  • apps/desktop/src/components/settings/components/tab-icon.tsx
  • apps/desktop/src/components/settings/views/ai-stt.tsx
  • apps/desktop/src/components/settings/views/index.ts
  • plugins/local-stt/build.rs
  • plugins/local-llm/build.rs
  • plugins/local-stt/src/server/mod.rs
  • apps/desktop/src/components/license.tsx
  • plugins/local-llm/src/lib.rs
  • apps/desktop/src/components/toast/model-download.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
  • apps/desktop/src/components/settings/components/types.ts
  • plugins/local-stt/js/bindings.gen.ts
  • plugins/local-stt/src/ext.rs
  • plugins/listener/src/fsm.rs
  • apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx
  • plugins/local-llm/js/bindings.gen.ts
  • crates/am/src/lib.rs
  • plugins/local-llm/src/commands.rs
  • plugins/local-stt/src/lib.rs
  • plugins/local-stt/src/commands.rs
  • apps/desktop/src/routes/app.settings.tsx
  • apps/desktop/src/components/settings/components/ai/shared.tsx
  • plugins/local-llm/src/model.rs
  • apps/desktop/src/hooks/use-license.ts
  • apps/desktop/src/components/settings/components/ai/llm-local-view.tsx
  • apps/desktop/src/components/settings/views/ai-llm.tsx
🧬 Code Graph Analysis (16)
crates/am/src/model.rs (1)
plugins/local-stt/js/bindings.gen.ts (1)
  • ModelInfo (64-64)
apps/desktop/src/components/settings/views/ai-stt.tsx (5)
apps/desktop/src/components/toast/shared.tsx (1)
  • showSttModelDownloadToast (55-91)
apps/desktop/src/components/settings/components/ai/shared.tsx (2)
  • SharedSTTProps (91-98)
  • STTModel (55-61)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1)
  • STTViewLocal (107-208)
apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1)
  • STTViewRemote (3-41)
apps/desktop/src/components/settings/views/ai.tsx (1)
  • LocalAI (233-909)
apps/desktop/src/components/license.tsx (1)
apps/desktop/src/hooks/use-license.ts (1)
  • useLicense (8-102)
plugins/local-llm/src/lib.rs (2)
plugins/local-llm/js/bindings.gen.ts (1)
  • commands (9-46)
plugins/local-llm/src/commands.rs (1)
  • list_supported_model (13-34)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (4)
plugins/local-stt/js/bindings.gen.ts (1)
  • WhisperModel (68-68)
apps/desktop/src/components/settings/components/ai/shared.tsx (2)
  • SharedSTTProps (91-98)
  • STTModel (55-61)
packages/ui/src/lib/utils.ts (1)
  • cn (4-6)
packages/ui/src/components/ui/button.tsx (1)
  • Button (37-89)
plugins/local-stt/js/bindings.gen.ts (1)
plugins/local-llm/js/bindings.gen.ts (1)
  • ModelInfo (58-58)
plugins/local-stt/src/ext.rs (5)
plugins/local-stt/src/commands.rs (1)
  • get_servers (120-124)
plugins/local-stt/js/bindings.gen.ts (1)
  • ServerType (66-66)
apps/desktop/src-tauri/src/ext.rs (1)
  • state (129-129)
plugins/connector/src/ext.rs (1)
  • state (96-96)
plugins/local-llm/src/ext.rs (6)
  • state (47-47)
  • state (54-54)
  • state (83-83)
  • state (118-118)
  • state (170-170)
  • state (189-189)
plugins/listener/src/fsm.rs (4)
crates/whisper-local/src/model.rs (2)
  • languages (31-34)
  • static_prompt (36-39)
owhisper/owhisper-client/src/lib.rs (1)
  • api_key (42-45)
crates/whisper-cloud/src/client.rs (1)
  • api_key (28-31)
crates/pyannote-cloud/src/lib.rs (1)
  • api_key (22-25)
plugins/local-llm/js/bindings.gen.ts (1)
plugins/local-stt/js/bindings.gen.ts (1)
  • ModelInfo (64-64)
plugins/local-llm/src/commands.rs (2)
plugins/local-llm/js/bindings.gen.ts (2)
  • ModelInfo (58-58)
  • SupportedModel (59-59)
plugins/local-llm/src/model.rs (1)
  • model_size (36-42)
plugins/local-stt/src/lib.rs (3)
plugins/local-stt/js/bindings.gen.ts (1)
  • commands (9-46)
plugins/local-stt/src/commands.rs (4)
  • get_servers (120-124)
  • start_server (98-105)
  • stop_server (109-116)
  • list_pro_models (39-45)
plugins/local-stt/src/ext.rs (6)
  • get_servers (29-31)
  • get_servers (209-225)
  • start_server (21-24)
  • start_server (106-172)
  • stop_server (25-28)
  • stop_server (175-206)
plugins/local-stt/src/commands.rs (3)
plugins/local-stt/js/bindings.gen.ts (2)
  • ServerType (66-66)
  • ModelInfo (64-64)
crates/am/src/model.rs (1)
  • info (16-22)
plugins/local-stt/src/ext.rs (2)
  • get_servers (29-31)
  • get_servers (209-225)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (59-59)
plugins/local-llm/src/model.rs (2)
plugins/local-stt/js/bindings.gen.ts (1)
  • ModelInfo (64-64)
plugins/local-llm/js/bindings.gen.ts (2)
  • ModelInfo (58-58)
  • SupportedModel (59-59)
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (2)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
  • LLMModel (46-53)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (59-59)
apps/desktop/src/components/settings/views/ai-llm.tsx (4)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
  • LLMModel (46-53)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (59-59)
packages/ui/src/components/ui/tabs.tsx (3)
  • TabsList (53-53)
  • TabsTrigger (53-53)
  • Tabs (53-53)
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
  • LLMLocalView (11-136)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: ci (macos, macos-latest)
  • GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (56)
crates/am/src/lib.rs (1)

3-3: Module addition LGTM.

Clear separation of model concerns is good.

apps/desktop/src/hooks/use-license.ts (1)

20-21: Sanity check passed: timing constants and React Query API verified

  • No remaining shouldRefresh() calls found.
  • All packages use @tanstack/react-query@^5.84.0, which provides the isPending mutation state.
  • refetchInterval: 5m + refetchIntervalInBackground: true align with the 7-day TTL, 3-day refresh threshold, 30-minute provider check, and 1-hour rate limit.
apps/desktop/src/components/toast/model-download.tsx (1)

84-87: Gemma3_4bQ4 included in LLM existence check — good addition

This aligns existence detection with the new supported model. No further changes needed here.

apps/desktop/src/components/toast/model-select.tsx (1)

17-17: Navigation target updated to ai-stt — aligned with tab split

The new target matches the “Transcription” tab. Looks correct.

plugins/local-stt/src/lib.rs (1)

39-43: Specta bindings updated for get_servers and list_pro_models — good

Matches the new commands and enables TS bindings to consume them.

apps/desktop/src/components/settings/components/tab-icon.tsx (1)

4-4: New AI tab icons wired correctly

  • ai-llm → SparklesIcon
  • ai-stt → BirdIcon
    No unused imports; mapping is consistent.

Also applies to: 26-30

plugins/local-stt/src/server/mod.rs (1)

4-6: Derives expanded for ServerType — necessary for map keys and ergonomics

This unblocks get_servers returning HashMap<ServerType, _>.

plugins/listener/src/fsm.rs (1)

568-569: Intentional unused parameter marker is fine

The _jargons naming cleanly silences warnings. Leaving this here to note the intent.

plugins/local-llm/permissions/schemas/schema.json (1)

381-392: Permission schema extended for list_supported_model — consistent with API changes

New allow/deny kinds and default set update are coherent with the ModelInfo-based flow.

Also applies to: 454-458

plugins/local-stt/permissions/schemas/schema.json (3)

478-482: Default permission set documentation updated correctly

The default set now enumerates allow-get-servers and allow-list-pro-models. Consistent with added kinds.


321-332: Permission Kinds Verified for get_servers

  • plugins/local-stt/permissions/autogenerated/commands/get_servers.toml exists
  • plugins/local-stt/permissions/default.toml includes "allow-get-servers"
  • plugins/local-stt/permissions/autogenerated/reference.md includes local-stt:deny-get-servers
  • Code references to get_servers in src/ align with the command name

No issues found. LGTM.


393-404: Verified list_pro_models permissions integration

All relevant files and references are present and correctly named:

  • plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml exists
  • Default permissions include "allow-list-pro-models" in plugins/local-stt/permissions/default.toml
  • Code references commands::list_pro_models in plugins/local-stt/src/lib.rs and implements list_pro_models in plugins/local-stt/src/commands.rs

No further changes required.

apps/desktop/src/locales/ko/messages.po (2)

267-275: Placeholder change acknowledged

Template placeholder msgid "{0}" remains intact; no i18n interpolation risks introduced. LGTM.


990-994: Please confirm intent for Korean translations in apps/desktop/src/locales/ko/messages.po

Our check shows 1,679 entries with empty msgstr in the Korean locale, including the newly added keys:

  • msgid "Local"
  • msgid "Remote"
  • msgid "Summarization"
  • msgid "Transcription"

If you intend to ship with English fallbacks, please annotate or mark these untranslated entries accordingly. Otherwise, add the appropriate Korean translations (or flag them for later) before merging.

apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1)

12-12: Import source switch to stt-view-local is correct

sttModelMetadata is now centralized under the local STT view; usage remains unchanged. Good move.

plugins/local-stt/build.rs (1)

9-9: Registering get_servers and list_pro_models

Additions are consistent with the rest of the stack (commands.rs, bindings, schema). No further changes needed here.

Also applies to: 13-13

apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1)

11-11: Import source switch to stt-view-local is correct

Centralizes STT metadata under the Local view; usage unchanged. OK.

plugins/local-stt/permissions/default.toml (1)

10-10: Add default permissions for new commands — LGTM

allow-get-servers and allow-list-pro-models align with the new command surfaces and schema updates.

Also applies to: 14-14

plugins/local-llm/build.rs (1)

13-13: Command registry updated — LGTM

list_supported_model correctly reflects the renamed API and enables Specta/bindings generation.

plugins/local-llm/src/lib.rs (1)

40-41: Specta registry uses list_supported_model — LGTM

Matches the commands.rs rename and updated return type. No further changes needed here.

plugins/local-llm/permissions/default.toml (1)

15-16: Summary mismatch: both downloaded and supported list permissions are present

The AI summary states “replaced” allow-list-downloaded-model, but the file keeps it and adds allow-list-supported-model. This looks correct given both commands exist.

apps/desktop/src/components/settings/views/index.ts (1)

1-2: All LocalAI references have been removed
A repository-wide search for “LocalAI” returned no matches, confirming that all imports and usages have been updated to AILLM/AISTT.

plugins/local-stt/permissions/autogenerated/commands/get_servers.toml (1)

1-13: Autogenerated permissions manifest for get_servers — LGTM

The allow/deny entries and schema reference are consistent with the existing pattern.

apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1)

3-41: Remote STT placeholder component — LGTM

Clean, minimal, and consistent with the new split views.

plugins/local-stt/src/commands.rs (2)

1-2: Imports are correct and used.

HashMap is used by get_servers, Channel by download_model. No unused imports.


4-4: Correct dependency usage.

ServerType and LocalSttPluginExt are required for new commands; alignment matches plugins/local-stt/src/ext.rs.

apps/desktop/src/routes/app.settings.tsx (3)

11-13: Split views imported correctly.

AILLM/AISTT imports align with the tab split. No unused imports.


134-135: Conditional rendering aligns with new tabs.

Placement and order look fine and match titles.


53-56: Fix inconsistent icon for “ai-stt” in TABS mapping

The tab names are correctly added, but the icon for “ai-stt” in the TABS array is still Sparkles. In TabIcon, “ai-stt” renders a BirdIcon, so these should be aligned to avoid mismatches at runtime.

• apps/desktop/src/components/settings/components/types.ts

@@ export const TABS: { name: Tab; icon: LucideIcon }[] = [
  { name: "ai-llm", icon: Sparkles },
- { name: "ai-stt", icon: Sparkles },
+ { name: "ai-stt", icon: BirdIcon },
]

– Don’t forget to import BirdIcon from lucide-react alongside the other icons.

Likely an incorrect or invalid review comment.

plugins/local-llm/src/model.rs (1)

4-10: ModelInfo addition is consistent with bindings.

Fields and derives align with TS type and specta. Looks good.

plugins/local-llm/js/bindings.gen.ts (2)

58-59: Types align with Rust.

ModelInfo and the extended SupportedModel (Gemma3_4bQ4) match the Rust-side changes.


13-15: No outdated LLM command usages; renaming is safe
All instances of listSupportedModels() belong to the local-stt plugin and remain unchanged. The local-llm code only calls the new listSupportedModel(), so there are no stale callsites to update.

apps/desktop/src/components/settings/components/ai/shared.tsx (2)

2-2: Type import is correct and tree-shakable.

Using a type-only import avoids runtime cost. Good.


6-7: cn import used.

No unused imports; consistent with usage in RatingDisplay.

plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml (1)

1-14: LGTM! Autogenerated permissions file follows expected pattern.

The permission file structure is correct with proper schema reference and standard allow/deny permission pairs for the list_pro_models command.

plugins/local-stt/js/bindings.gen.ts (2)

34-36: LGTM! New getServers command properly typed.

The return type correctly maps ServerType to optional API base URLs.


43-45: LGTM! New listProModels command implementation.

Command properly invokes the backend and returns ModelInfo array.

plugins/local-llm/permissions/autogenerated/reference.md (2)

18-18: LGTM! Default permission correctly updated.

The new allow-list-supported-model permission is properly added to the default permission set.


214-235: LGTM! Permission table entries properly added.

The allow and deny permission entries for list_supported_model command are correctly documented in the table format.

plugins/local-stt/src/ext.rs (3)

1-1: LGTM! HashMap import added for new functionality.

The import is necessary for the new get_servers method return type.


29-31: LGTM! Trait method signature is well-defined.

The get_servers method properly returns a HashMap mapping ServerType to optional API base URLs, which aligns with the expected use case.


208-225: LGTM! Implementation follows established patterns.

The implementation properly:

  • Uses tracing instrumentation consistently with other methods
  • Locks shared state safely
  • Extracts API base URLs from both server types
  • Uses a clean array-to-HashMap conversion pattern
  • Handles the Option wrapping correctly
plugins/local-stt/permissions/autogenerated/reference.md (2)

13-13: LGTM! Default permissions properly updated.

The new allow-get-servers and allow-list-pro-models permissions are correctly added to the default permission set.

Also applies to: 17-17


82-104: LGTM! Permission table entries correctly documented.

Both get_servers and list_pro_models commands have their allow/deny permission entries properly documented in the table format.

Also applies to: 239-260

plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml (1)

1-14: LGTM! Autogenerated permissions file properly structured.

The permission file correctly defines allow/deny permissions for the list_supported_model command with proper schema reference.

apps/desktop/src/components/settings/views/ai-stt.tsx (1)

12-78: LGTM! Component structure and state management are well implemented.

The tabbed interface, state management, and component organization follow React best practices. The translation support and query client usage are properly implemented.

plugins/local-llm/src/commands.rs (2)

1-1: LGTM! Import statement properly updated.

The imports now include the necessary ModelInfo and SupportedModel types for the refactored function.


13-34: LGTM! Function refactoring improves API design.

The changes provide several improvements:

  • Function name is more consistent (singular form)
  • Returns rich metadata via ModelInfo instead of just enum values
  • Provides helpful descriptions for each model
  • Uses proper size calculation via model_size() method

The hardcoded approach is appropriate for a controlled set of supported models.

apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1)

154-161: Ignore incorrect suggestion on useDebouncedCallback signature

The useDebouncedCallback hook from beautiful-react-hooks expects arguments as (fn, dependencies?, wait?, options?). Your current usage—passing [] as the dependencies array and 2000 as the debounce delay—is correct. No changes are required.

Likely an incorrect or invalid review comment.

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)

35-43: Selection flow looks sound

Selecting a downloaded model updates current model, disables custom LLM, and restarts the server. This aligns with the desired UX.

apps/desktop/src/components/settings/views/ai-llm.tsx (2)

122-135: LGTM: Dynamic LLM model list initialization

Listing supported models and normalizing to LLMModel[] with size formatting is clean and straightforward.


186-199: LGTM: Download status typing with satisfies

Using satisfies Record<SupportedModel, boolean> keeps keys aligned with the union and prevents drift.

apps/desktop/src/locales/en/messages.po (1)

990-997: Translations align with new UI

New strings for Local/Remote, Loading available models…, Model Name, and Autonomy Selector match the updated components.

Also applies to: 1248-1252, 970-973, 619-620

apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (3)

158-167: LGTM: Download status propagation

Polling and mapping download status into sttModels via setSttModels is correct and efficient.


304-323: LGTM: Safe selection and server restart flow

Guarding click by model.downloaded and restarting the server ensures consistent state after selection.


225-227: Show in Finder handling is solid

Opening the models directory via modelsDir + openPath and stopping propagation (in download button) prevents unintended selection. Mirrors the recommended LLM fix.

Also applies to: 337-349


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

🔭 Outside diff range comments (3)
crates/am/src/lib.rs (1)

15-21: Avoid live-network unit tests; mark ignored and make the assertion meaningful

This test calls a real server on localhost:50060 and always passes because of assert!(true). Make the running-server requirement explicit and give the assertion real meaning.

-    #[tokio::test]
-    async fn test_client_creation() {
-        let client = Client::new("http://localhost:50060");
-        let status = client.status().await;
-        println!("{:?}", status);
-        assert!(true);
-    }
+    #[tokio::test]
+    #[ignore = "requires local AM server at http://localhost:50060"]
+    async fn test_client_creation() {
+        let client = Client::new("http://localhost:50060");
+        let status = client.status().await;
+        // Verify the status call succeeds when the server is running.
+        assert!(status.is_ok(), "status endpoint errored: {:?}", status.err());
+    }
apps/desktop/src/hooks/use-license.ts (1)

41-44: Remove local onError handlers; violates “No error handling” guideline and risks logging sensitive data.

Per repo guideline, avoid local error handling; also don’t log raw errors (may contain PII). Rely on react-query statuses and central UI/telemetry instead.

Apply these diffs to drop local handlers:

@@
   return license;
 },
-    onError: (e) => {
-      console.error(e);
-      queryClient.setQueryData(LICENSE_QUERY_KEY, null);
-    },
     onSuccess: (license) => {
       queryClient.setQueryData(LICENSE_QUERY_KEY, license);
     },
   });
@@
   });
-    onError: console.error,
     onSuccess: () => {
       queryClient.invalidateQueries({ queryKey: LICENSE_QUERY_KEY });
     },
   });
@@
   });
-    onError: console.error,
     onSuccess: () => {
       queryClient.setQueryData(LICENSE_QUERY_KEY, null);
     },
   });

Also applies to: 77-77, 91-91

apps/desktop/src/locales/ko/messages.po (1)

16-20: Fill missing Korean translations for new AI settings strings

The KO locale file (apps/desktop/src/locales/ko/messages.po) contains several new or relocated msgid entries with empty msgstr values, which will fall back to English if not translated:

  • "Type terms separated by commas (e.g., Blitz Meeting, PaC Squad)" (src/components/settings/views/general.tsx:362)
  • "{days} day{plural} later" (packages/utils/src/datetime.ts:22)
  • "{hours} hour{plural} later" (packages/utils/src/datetime.ts:27)
  • "{minutes} minutes later" (packages/utils/src/datetime.ts:32)
  • Additional keys (e.g., Local, Remote, Autonomy Selector, provider labels)

Please fill in the Korean translations or mark these entries as fuzzy before release to avoid mixed-language UI.

♻️ Duplicate comments (2)
apps/desktop/src/hooks/use-license.ts (1)

2-2: Solid fix: memoized getLicenseStatus and ms-based expiry window.

Addressed prior feedback: useCallback stabilizes identity; msUntilExpiry removes the off-by-one near expiry. Looks correct.

Also applies to: 50-65
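The status shape described above can be sketched as a pure helper; the constants mirror the ones named in this review, but the exact signature is an assumption, not the real hook:

```typescript
const REFRESH_THRESHOLD_DAYS = 3;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Comparing in milliseconds avoids the day-granularity off-by-one near
// expiry that whole-day arithmetic had.
function getLicenseStatus(expiryMs: number, nowMs: number) {
  const msUntilExpiry = expiryMs - nowMs;
  return {
    isValid: msUntilExpiry > 0,
    needsRefresh:
      msUntilExpiry > 0 && msUntilExpiry <= REFRESH_THRESHOLD_DAYS * MS_PER_DAY,
  };
}
```

Being pure over (expiryMs, nowMs), the helper is trivially unit-testable independent of react-query.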

plugins/local-llm/src/model.rs (1)

1-3: Gemma3_4bQ4 correctly added to SUPPORTED_MODELS.

This resolves the prior omission so it appears in discovery flows.

🧹 Nitpick comments (4)
apps/desktop/src/hooks/use-license.ts (1)

60-64: Nit: extract MS_PER_DAY to improve readability and reuse.

@@
-const REFRESH_THRESHOLD_DAYS = 3;
+const REFRESH_THRESHOLD_DAYS = 3;
+const MS_PER_DAY = 24 * 60 * 60 * 1000;
@@
-    return {
-      needsRefresh: msUntilExpiry > 0
-        && msUntilExpiry <= REFRESH_THRESHOLD_DAYS * 24 * 60 * 60 * 1000,
-      isValid: msUntilExpiry > 0,
-    };
+    return {
+      needsRefresh:
+        msUntilExpiry > 0 &&
+        msUntilExpiry <= REFRESH_THRESHOLD_DAYS * MS_PER_DAY,
+      isValid: msUntilExpiry > 0,
+    };
plugins/local-stt/src/server/internal.rs (1)

54-58: Avoid silently ignoring shutdown send result

If there are no receivers, send() returns Err and the result is silently dropped. Consider debug-level logging for visibility.

-        let _ = self.shutdown.send(());
+        if self.shutdown.send(()).is_err() {
+            tracing::debug!("local_stt_internal_terminate_no_receivers");
+        }
plugins/local-stt/src/server/external.rs (1)

21-21: Unused shutdown receiver; either use it or remove it

There’s no consumer of _shutdown_rx, and terminate() kills the child directly. Either wire the receiver to a graceful shutdown path or drop the channel entirely.

-    let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(());
+    let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(()); // Keep only if you plan to use it for graceful shutdown

If not planning to use it, remove the channel and the shutdown field.

plugins/local-stt/src/ext.rs (1)

170-172: Enrich missing-binary error with path context

AmBinaryNotFound carries no context. Consider including which path(s) were missing to ease debugging.

For example, make AmBinaryNotFound(PathBuf) and return the missing path.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cbf764d and 332d7cd.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (51)
  • apps/desktop/src/components/license.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (3 hunks)
  • apps/desktop/src/components/settings/components/ai/shared.tsx (2 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (0 hunks)
  • apps/desktop/src/components/settings/components/index.ts (0 hunks)
  • apps/desktop/src/components/settings/components/tab-icon.tsx (2 hunks)
  • apps/desktop/src/components/settings/components/types.ts (2 hunks)
  • apps/desktop/src/components/settings/components/wer-modal.tsx (0 hunks)
  • apps/desktop/src/components/settings/views/ai-llm.tsx (6 hunks)
  • apps/desktop/src/components/settings/views/ai-stt.tsx (1 hunks)
  • apps/desktop/src/components/settings/views/index.ts (1 hunks)
  • apps/desktop/src/components/toast/model-download.tsx (1 hunks)
  • apps/desktop/src/components/toast/model-select.tsx (1 hunks)
  • apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1 hunks)
  • apps/desktop/src/components/welcome-modal/model-selection-view.tsx (1 hunks)
  • apps/desktop/src/hooks/use-license.ts (5 hunks)
  • apps/desktop/src/locales/en/messages.po (35 hunks)
  • apps/desktop/src/locales/ko/messages.po (35 hunks)
  • apps/desktop/src/routes/app.settings.tsx (3 hunks)
  • crates/am/Cargo.toml (1 hunks)
  • crates/am/src/client.rs (2 hunks)
  • crates/am/src/lib.rs (2 hunks)
  • crates/am/src/model.rs (1 hunks)
  • plugins/listener/src/fsm.rs (3 hunks)
  • plugins/local-llm/build.rs (1 hunks)
  • plugins/local-llm/js/bindings.gen.ts (2 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/reference.md (2 hunks)
  • plugins/local-llm/permissions/default.toml (1 hunks)
  • plugins/local-llm/permissions/schemas/schema.json (2 hunks)
  • plugins/local-llm/src/commands.rs (2 hunks)
  • plugins/local-llm/src/lib.rs (1 hunks)
  • plugins/local-llm/src/model.rs (1 hunks)
  • plugins/local-stt/build.rs (1 hunks)
  • plugins/local-stt/js/bindings.gen.ts (2 hunks)
  • plugins/local-stt/permissions/autogenerated/commands/get_servers.toml (1 hunks)
  • plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml (1 hunks)
  • plugins/local-stt/permissions/autogenerated/reference.md (3 hunks)
  • plugins/local-stt/permissions/default.toml (1 hunks)
  • plugins/local-stt/permissions/schemas/schema.json (3 hunks)
  • plugins/local-stt/src/commands.rs (3 hunks)
  • plugins/local-stt/src/error.rs (1 hunks)
  • plugins/local-stt/src/ext.rs (8 hunks)
  • plugins/local-stt/src/lib.rs (1 hunks)
  • plugins/local-stt/src/server/external.rs (1 hunks)
  • plugins/local-stt/src/server/internal.rs (1 hunks)
  • plugins/local-stt/src/server/mod.rs (1 hunks)
  • plugins/windows/src/ext.rs (1 hunks)
💤 Files with no reviewable changes (3)
  • apps/desktop/src/components/settings/components/index.ts
  • apps/desktop/src/components/settings/components/wer-modal.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
🚧 Files skipped from review as they are similar to previous changes (33)
  • plugins/local-stt/permissions/autogenerated/commands/get_servers.toml
  • apps/desktop/src/components/license.tsx
  • crates/am/Cargo.toml
  • plugins/local-stt/src/lib.rs
  • crates/am/src/model.rs
  • plugins/local-llm/src/lib.rs
  • apps/desktop/src/components/toast/model-select.tsx
  • plugins/local-stt/permissions/default.toml
  • apps/desktop/src/components/welcome-modal/model-selection-view.tsx
  • apps/desktop/src/components/welcome-modal/download-progress-view.tsx
  • apps/desktop/src/components/settings/views/ai-stt.tsx
  • plugins/local-stt/build.rs
  • plugins/windows/src/ext.rs
  • apps/desktop/src/components/settings/components/ai/stt-view-remote.tsx
  • plugins/local-llm/permissions/autogenerated/commands/list_supported_model.toml
  • plugins/listener/src/fsm.rs
  • apps/desktop/src/components/toast/model-download.tsx
  • apps/desktop/src/components/settings/components/ai/shared.tsx
  • plugins/local-stt/src/commands.rs
  • plugins/local-stt/js/bindings.gen.ts
  • plugins/local-llm/build.rs
  • plugins/local-stt/permissions/schemas/schema.json
  • plugins/local-llm/permissions/autogenerated/reference.md
  • plugins/local-stt/permissions/autogenerated/commands/list_pro_models.toml
  • apps/desktop/src/components/settings/components/types.ts
  • plugins/local-llm/permissions/default.toml
  • apps/desktop/src/routes/app.settings.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
  • apps/desktop/src/components/settings/components/tab-icon.tsx
  • plugins/local-stt/permissions/autogenerated/reference.md
  • plugins/local-llm/js/bindings.gen.ts
  • apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx
  • apps/desktop/src/components/settings/components/ai/llm-local-view.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

⚙️ CodeRabbit Configuration File

**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".

Files:

  • crates/am/src/client.rs
  • plugins/local-stt/src/error.rs
  • apps/desktop/src/components/settings/views/index.ts
  • plugins/local-stt/src/server/internal.rs
  • crates/am/src/lib.rs
  • plugins/local-stt/src/server/mod.rs
  • plugins/local-stt/src/server/external.rs
  • plugins/local-stt/src/ext.rs
  • apps/desktop/src/hooks/use-license.ts
  • plugins/local-llm/src/model.rs
  • apps/desktop/src/components/settings/views/ai-llm.tsx
  • plugins/local-llm/src/commands.rs
🧬 Code Graph Analysis (7)
plugins/local-stt/src/server/internal.rs (1)
plugins/local-stt/src/server/external.rs (1)
  • terminate (8-12)
crates/am/src/lib.rs (1)
crates/am/src/client.rs (2)
  • new (13-18)
  • new (128-143)
plugins/local-stt/src/server/external.rs (1)
plugins/local-stt/src/server/internal.rs (2)
  • terminate (55-58)
  • run_server (61-88)
plugins/local-stt/src/ext.rs (2)
plugins/local-stt/src/commands.rs (1)
  • get_servers (120-124)
crates/am/src/client.rs (2)
  • new (13-18)
  • new (128-143)
plugins/local-llm/src/model.rs (3)
plugins/local-llm/js/bindings.gen.ts (2)
  • SupportedModel (59-59)
  • ModelInfo (58-58)
plugins/local-stt/js/bindings.gen.ts (1)
  • ModelInfo (64-64)
crates/am/src/model.rs (1)
  • model_size (47-53)
apps/desktop/src/components/settings/views/ai-llm.tsx (4)
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
  • LLMModel (46-53)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (59-59)
packages/ui/src/components/ui/tabs.tsx (2)
  • TabsList (53-53)
  • Tabs (53-53)
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
  • LLMLocalView (11-136)
plugins/local-llm/src/commands.rs (2)
plugins/local-llm/js/bindings.gen.ts (2)
  • ModelInfo (58-58)
  • SupportedModel (59-59)
plugins/local-llm/src/model.rs (1)
  • model_size (36-42)
🪛 GitHub Actions: .github/workflows/fmt.yaml
plugins/local-llm/src/model.rs

[error] 1-8: dprint check failed. Found 1 not formatted file. Command: ~/.dprint/bin/dprint check --config 'dprint.json'.

🔇 Additional comments (28)
crates/am/src/lib.rs (1)

3-9: Re-exporting model module looks consistent

Adding mod model; and pub use model::*; aligns with existing export style (client, error, types). No unused items introduced.

crates/am/src/client.rs (4)

7-10: Client rename: struct definition LGTM

Struct visibility, fields, and Clone derive remain correct. No unused imports/fields detected.


12-26: Client impl: no functional changes detected

Constructor signatures and behavior unchanged aside from the type name. Consistent with prior API.


201-205: Default impl: LGTM

Keeps the same default base URL; consistent and clear.


7-205: No remaining AmClient references detected
I searched the entire repository (including all lib.rs files) for AmClient and found no occurrences.
All references have been updated or aliased appropriately.

apps/desktop/src/hooks/use-license.ts (3)

6-7: Good centralization of TTL across activate/refresh.

Consistent LICENSE_TTL_SECONDS in both validate paths reduces drift.

Also applies to: 35-36, 72-73


101-101: All shouldRefresh references have been removed
Ran rg -n --hidden --no-ignore -S "shouldRefresh\(|shouldRefresh\b", and no matches were found. The API rename to getLicenseStatus is fully applied across the codebase.


21-23: No redundant polling vs. manual refresh
The 5-minute getLicense query only reads the local license state via keygen.getLicense, while the LicenseRefreshProvider kicks off a network-backed refreshLicense call on a 30-minute interval (with a 1-hour rate limit) only when the license is invalid or nearing expiry. These are distinct, non-overlapping flows and will not duplicate work.

plugins/local-stt/src/server/mod.rs (1)

4-6: Deriving Copy/Hash is appropriate for map keys and lightweight enums

Good call deriving Clone, Copy, PartialEq, Eq, and Hash. This makes ServerType usable as a HashMap key and cheap to pass by value. No concerns.

plugins/local-stt/src/error.rs (1)

21-26: No stale BinaryNotFound references remain; all call sites have been updated to use AmBinaryNotFound, AmApiKeyNotSet, and ServerAlreadyRunning.

  • Verified that BinaryNotFound no longer appears anywhere in the codebase.
  • Error variants in plugins/local-stt/src/error.rs and all crate::Error::… usages in plugins/local-stt/src/ext.rs have been updated.
plugins/local-stt/src/server/internal.rs (1)

51-52: Make shutdown private: good encapsulation

Hiding the shutdown sender prevents misuse and pushes callers toward terminate(). Looks good.

plugins/local-stt/src/server/external.rs (2)

18-18: Spawn signature change LGTM

Using the returned child (instead of discarding) aligns with the new lifecycle. Good.


25-27: Handle construction looks consistent

api_base + shutdown + child compose a minimal, self-sufficient handle. No issues.

plugins/local-stt/src/ext.rs (4)

1-1: HashMap import is correct for the new API

Matches the new get_servers surface.


29-31: New get_servers API is a good addition

Public extension returning endpoints per server type is useful for discovery and aligns with ServerType’s new Hash/Eq traits.


156-161: AM key retrieval and error surfacing look good

ok_or(AmApiKeyNotSet) keeps the error path explicit. No issues.


247-264: get_servers implementation is clean and efficient

Leverages new ServerType traits and returns a compact 2-entry map. LGTM.

plugins/local-llm/permissions/schemas/schema.json (2)

381-392: Confirmed: allow-list-supported-model permission is fully integrated

Everything is in place for the new list_supported_model command:

  • plugins/local-llm/permissions/default.toml includes "allow-list-supported-model" in the default permission set.
  • JSON schema declares both allow-list-supported-model and deny-list-supported-model with matching descriptions.
  • Autogenerated reference.md lists the new permissions under local-llm:allow-list-supported-model and local-llm:deny-list-supported-model.
  • Autogenerated commands TOML defines the allow/deny rules for list_supported_model.

All default configurations and docs correctly reflect the change.


454-458: Default permission documentation verified
The allow-list-supported-model entry is present in schema.json and in plugins/local-llm/permissions/default.toml. No further changes needed.

plugins/local-llm/src/commands.rs (2)

1-1: Import update looks correct.

Bringing ModelInfo and SupportedModel into scope matches the new return type and payload.


13-33: No action required: ModelInfo.key is already the correct SupportedModel union type

The generated TypeScript binding in plugins/local-llm/js/bindings.gen.ts shows:

  • export type ModelInfo = { key: SupportedModel; name: string; description: string; size_bytes: number }
  • export type SupportedModel = "Llama3p2_3bQ4" | "HyprLLM" | "Gemma3_4bQ4"

Since ModelInfo.key is already typed as SupportedModel, no changes are needed.

plugins/local-llm/src/model.rs (2)

4-10: ModelInfo struct is appropriate for frontend consumption.

Fields and derives look right for Specta/Serde. No issues.


20-33: New variant mappings look consistent (filename/URL/size/checksum).

URLs and sizes are defined; checksum method addition is fine for integrity checks if used later.

Please ensure any downloader/validator (if present) uses model_checksum() for integrity to avoid silent corruption.

Also applies to: 36-50

apps/desktop/src/components/settings/views/ai-llm.tsx (5)

29-29: Import toast helper (OK).

Used by the download flow; no unused imports.


114-121: Component/state scaffolding for tab split (OK).

activeTab and llmModelsState are introduced cleanly with minimal changes to downstream props.


159-161: Thin wrapper for model download (OK).

Keeps intent clear and reuses shared flow.


613-621: Tabs Local/Remote wiring is consistent with the new split.

Clean 2-tab UI and translation hooks; no issues.


626-716: Remote view: autonomy control placement makes sense behind custom endpoint.

The gating on customLLMEnabled.data is clear. Minimal comments; aligns with guideline.

Comment on lines +122 to +135
  useEffect(() => {
    localLlmCommands.listSupportedModel().then((ms) => {
      const models: LLMModel[] = ms.map((model) => ({
        key: model.key as SupportedModel,
        name: model.name,
        description: model.description,
        available: true,
        downloaded: false,
        size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
      }));

      setLlmModels(models);
    });
  }, []);

💡 Verification agent

🧩 Analysis chain

Dynamic model list on mount is good; align TS type to avoid casts.

Mapping the backend ModelInfo to UI state is correct. You currently cast model.key as SupportedModel; prefer having bindings emit key: SupportedModel to drop the cast.


🏁 Script executed:

#!/bin/bash
rg -n $'export\\s+type\\s+ModelInfo\\s*=\\s*\\{\\s*key:\\s*(SupportedModel|string);' plugins/local-llm/js/bindings.gen.ts -n -A 0

Length of output: 229


🏁 Script executed:

#!/bin/bash
# Locate the definition of LLMModel to confirm its key type
rg -n "export type LLMModel" -A 5

Length of output: 35


🏁 Script executed:

#!/bin/bash
# Search for LLMModel type alias or interface in the repository
rg -n "export (type|interface) LLMModel" -n

# Find where LLMModel is imported in the ai-llm.tsx file to locate its definition
rg -n "import .*LLMModel" -n apps/desktop/src/components/settings/views/ai-llm.tsx

Length of output: 223


🏁 Script executed:

#!/bin/bash
rg -n "export interface LLMModel" -A 10 apps/desktop/src/components/settings/components/ai/shared.tsx

Length of output: 333


Drop redundant as SupportedModel cast

Since ModelInfo.key is already typed as SupportedModel, you can remove the unnecessary cast in your mapping:

• File: apps/desktop/src/components/settings/views/ai-llm.tsx

   const models: LLMModel[] = ms.map((model) => ({
-    key: model.key as SupportedModel,
+    key: model.key,
     name: model.name,
     description: model.description,
     available: true,
     downloaded: false,
     size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
   }));

No changes are needed in the generated bindings—this simply cleans up the UI code.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  useEffect(() => {
    localLlmCommands.listSupportedModel().then((ms) => {
      const models: LLMModel[] = ms.map((model) => ({
        key: model.key as SupportedModel,
        name: model.name,
        description: model.description,
        available: true,
        downloaded: false,
        size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
      }));
      setLlmModels(models);
    });
  }, []);
  useEffect(() => {
    localLlmCommands.listSupportedModel().then((ms) => {
      const models: LLMModel[] = ms.map((model) => ({
        key: model.key,
        name: model.name,
        description: model.description,
        available: true,
        downloaded: false,
        size: `${(model.size_bytes / 1024 / 1024 / 1024).toFixed(2)} GB`,
      }));
      setLlmModels(models);
    });
  }, []);
🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/views/ai-llm.tsx around lines 122 to
135, the mapping over ModelInfo objects includes a redundant "as SupportedModel"
cast on model.key; remove the unnecessary cast so you assign key: model.key
directly (since ModelInfo.key is already typed as SupportedModel) and leave the
rest of the mapping unchanged.

Comment on lines +146 to +154
        .state::<crate::SharedState>()
        .lock()
        .await
        .external_server
        .is_some()
    {
        return Err(crate::Error::ServerAlreadyRunning);
    }


🛠️ Refactor suggestion

Same race for external start

The external branch has the same check-then-set race. Apply the same serialization approach as above.

🤖 Prompt for AI Agents
In plugins/local-stt/src/ext.rs around lines 146 to 154, the code does a
separate check for whether external_server is Some(), creating the same
check-then-set race as the other branch; fix it by performing the check and the
setting of external_server while holding the same SharedState lock in one atomic
section (i.e., acquire the lock, if external_server.is_some() return
ServerAlreadyRunning, otherwise set external_server = Some(...) before releasing
the lock) so that the check-and-set are serialized and no race can occur; if
creating the server requires async work, create the server value first but only
insert it into shared state inside the lock (or use a placeholder/handle) so the
mutation remains atomic.
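The fix the prompt describes can be sketched with a synchronous std::sync::Mutex (the plugin uses an async lock, but the pattern is identical): acquire one guard, check, and set before releasing, so no second caller can interleave between the check and the write. All type names below are illustrative stand-ins for the plugin's SharedState and handle types:

```rust
use std::sync::Mutex;

// Illustrative stand-ins for the plugin's server handle and shared state.
struct ServerHandle;

#[derive(Default)]
struct SharedState {
    external_server: Option<ServerHandle>,
}

#[derive(Debug, PartialEq)]
enum Error {
    ServerAlreadyRunning,
}

// Check-and-set under a single guard: no other caller can observe the
// "not running" state and also win the race to set it.
fn start_external(state: &Mutex<SharedState>) -> Result<(), Error> {
    let mut guard = state.lock().unwrap();
    if guard.external_server.is_some() {
        return Err(Error::ServerAlreadyRunning);
    }
    guard.external_server = Some(ServerHandle);
    Ok(())
}

fn main() {
    let state = Mutex::new(SharedState::default());
    assert!(start_external(&state).is_ok());
    assert_eq!(start_external(&state), Err(Error::ServerAlreadyRunning));
}
```

If building the handle requires async work, construct it outside the lock and only insert it into shared state while holding the guard, as the prompt suggests.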


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (4)
apps/desktop/src/locales/en/messages.po (1)

1-15: Set explicit plural rules (required for correct pluralization).

Plural-Forms is empty. Define explicit rules for English to avoid pluralization mishandling at runtime.

Apply this diff in the header:

 "Language: en\n"
-"Plural-Forms: \n"
+"Plural-Forms: nplurals=2; plural=(n != 1);\n"
apps/desktop/src/locales/ko/messages.po (3)

1-15: Set explicit plural rules for Korean (prevents pluralization errors).

Plural-Forms is empty. Korean requires a single plural form.

Apply this diff:

 "Language: ko\n"
-"Plural-Forms: \n"
+"Plural-Forms: nplurals=1; plural=0;\n"

741-743: Fix untranslated string (“Enable Integration”).

This remains in English. Provide a Korean translation.

-msgstr "Enable Integration"
+msgstr "연동 활성화"

1-1691: Review translation placeholders and coverage in ko/messages.po

  • Untranslated entries: 260 msgstr "" entries remain, causing heavy fallback to English.
  • Placeholder mismatches: 24 instances where the source msgid contains tokens (e.g. {days}, {hours}, {0}, {description}) but the corresponding msgstr is empty, dropping those placeholders entirely.

Action items:

  • Provide Korean translations for all msgid entries.
  • Ensure each translation preserves all placeholder tokens exactly as in the source.
♻️ Duplicate comments (3)
apps/desktop/src/locales/en/messages.po (1)

966-973: Prune obsolete LLM/STT keys (#~ entries) from the catalog.

These legacy entries (“LLM - Local/Custom”, “Local Models”, “Speech to Text Model”, “Whisper Model Language Performance (WER)”, etc.) are still present as obsolete (#~). They add noise and bloat to catalogs.

Run to verify remaining obsolete entries:

#!/bin/bash
# Count obsolete strings in en catalog
rg -n '^#~ msgid ' apps/desktop/src/locales/en/messages.po | wc -l
# List a few representative ones
rg -n '^#~ msgid ' apps/desktop/src/locales/en/messages.po | head -n 20

Also applies to: 1003-1006, 1417-1420, 1651-1657

apps/desktop/src/locales/ko/messages.po (1)

966-973: Prune obsolete LLM/STT keys (#~ entries) from ko catalog as well.

Mirror the cleanup in the Korean catalog to remove deprecated strings (LLM - Local/Custom, Local Models, Speech to Text Model, WER modal, etc.) to keep catalogs lean.

Verify remaining obsolete entries:

#!/bin/bash
# Count obsolete strings in ko catalog
rg -n '^#~ msgid ' apps/desktop/src/locales/ko/messages.po | wc -l
rg -n '^#~ msgid ' apps/desktop/src/locales/ko/messages.po | head -n 20

Also applies to: 1003-1006, 1417-1420, 1651-1657

plugins/local-llm/src/model.rs (1)

1-5: Resolved: Gemma3_4bQ4 added to SUPPORTED_MODELS.

This addresses the earlier omission; enumeration flows will now surface Gemma.

🧹 Nitpick comments (5)
apps/desktop/src/locales/en/messages.po (2)

269-275: Avoid generic “{0}” message; prefer named placeholders or explicit IDs.

A bare "{0}" reused across contexts is brittle for translators and QA. Prefer an explicit ID with a meaningful message and named variables (e.g., {label}) or pass the already-localized string directly.

If keeping this pattern, ensure all usages always supply a fully localized value to {0}, not raw English fragments.


994-998: Disambiguate “Local” and “Remote” or add translator comments.

These labels appear in multiple AI settings tabs. To reduce ambiguity in non-English locales, consider:

  • Adding explicit IDs (js-lingui-explicit-id) tied to “processing path” context, or
  • Adding translator notes in code (/* translators: Local processing path */).

Also applies to: 1252-1256

plugins/local-llm/src/model.rs (3)

7-13: Derive Clone and Debug for ModelInfo for ergonomics and logging.

Commonly passed across layers and useful in logs; no runtime cost.

-#[derive(serde::Serialize, serde::Deserialize, specta::Type)]
+#[derive(serde::Serialize, serde::Deserialize, specta::Type, Clone, Debug)]
 pub struct ModelInfo {

23-28: Normalize file_name convention across models (nit).

Llama/Hypr use canonical short names, while Gemma uses the remote basename. Consider a consistent convention to avoid user confusion and duplicate downloads if names change upstream.


39-45: Readability: add numeric separators to byte constants (nit).

Improves maintainability when scanning large literals.

-            SupportedModel::Llama3p2_3bQ4 => 2019377440,
-            SupportedModel::HyprLLM => 1107409056,
-            SupportedModel::Gemma3_4bQ4 => 2489894016,
+            SupportedModel::Llama3p2_3bQ4 => 2_019_377_440,
+            SupportedModel::HyprLLM => 1_107_409_056,
+            SupportedModel::Gemma3_4bQ4 => 2_489_894_016,
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 332d7cd and 77e20b9.

📒 Files selected for processing (4)
  • apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1 hunks)
  • apps/desktop/src/locales/en/messages.po (35 hunks)
  • apps/desktop/src/locales/ko/messages.po (35 hunks)
  • plugins/local-llm/src/model.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

⚙️ CodeRabbit Configuration File

**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".

Files:

  • plugins/local-llm/src/model.rs
🧬 Code Graph Analysis (1)
plugins/local-llm/src/model.rs (2)
plugins/local-llm/js/bindings.gen.ts (2)
  • SupportedModel (59-59)
  • ModelInfo (58-58)
crates/am/src/model.rs (1)
  • model_size (47-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: ci (windows, windows-latest)
  • GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (2)
plugins/local-llm/src/model.rs (2)

15-20: Enum extension looks consistent and exhaustively matched.

Variant added and all match arms updated accordingly.


31-36: Verify remote URLs and model sizes

The automated check only confirmed the HyprLLM URL. Please manually verify that the other two endpoints return HTTP 200 and their Content-Length headers match the expected model sizes to avoid integrity mismatches:

• File: plugins/local-llm/src/model.rs (lines 31–36)
– Llama3p2_3bQ4: expect HTTP 200, Content-Length = 2019377440
– Gemma3_4bQ4: expect HTTP 200, Content-Length = 2489894016

If any endpoint is unreachable or the size differs, update the URL or the model_size accordingly.

Comment on lines +472 to 475
#: src/routes/app.settings.tsx:58
msgid "Calendar"
msgstr ""


🛠️ Refactor suggestion

Provide minimal high-impact translations to avoid English fallback in the new AI settings UI.

These are core navigation/AI settings strings now exposed prominently. Suggested translations:

-#: src/routes/app.settings.tsx:58
-msgid "Calendar"
-msgstr ""
+msgid "Calendar"
+msgstr "캘린더"

-#: src/routes/app.settings.tsx:52
-msgid "General"
-msgstr ""
+msgid "General"
+msgstr "일반"

-#: src/components/settings/views/integrations.tsx:118
-msgid "Integrations"
-msgstr ""
+msgid "Integrations"
+msgstr "연동"

-#: src/routes/app.settings.tsx:60
-msgid "Notifications"
-msgstr ""
+msgid "Notifications"
+msgstr "알림"

-#: src/routes/app.settings.tsx:64
-msgid "Sound"
-msgstr ""
+msgid "Sound"
+msgstr "소리"

-#: src/routes/app.settings.tsx:56
-msgid "Transcription"
-msgstr ""
+msgid "Transcription"
+msgstr "음성 인식"

-#: src/routes/app.settings.tsx:62
-msgid "Templates"
-msgstr ""
+msgid "Templates"
+msgstr "템플릿"

-#: src/routes/app.settings.tsx:70
-msgid "License"
-msgstr ""
+msgid "License"
+msgstr "라이선스"

-#: src/components/left-sidebar/top-area/settings-button.tsx:101
-msgid "Settings"
-msgstr ""
+msgid "Settings"
+msgstr "설정"

-#: src/components/settings/views/ai-stt.tsx:63
-#: src/components/settings/views/ai-llm.tsx:617
-msgid "Local"
-msgstr ""
+msgid "Local"
+msgstr "로컬"

-#: src/components/settings/views/ai-stt.tsx:66
-#: src/components/settings/views/ai-llm.tsx:620
-msgid "Remote"
-msgstr ""
+msgid "Remote"
+msgstr "원격"

-#: src/components/welcome-modal/index.tsx:350
-#: src/components/welcome-modal/index.tsx:361
-msgid "Back"
-msgstr ""
+msgid "Back"
+msgstr "뒤로"

-#: src/components/welcome-modal/download-progress-view.tsx:258
-msgid "Continue"
-msgstr ""
+msgid "Continue"
+msgstr "계속"

-#: src/components/welcome-modal/welcome-view.tsx:36
-msgid "Get Started"
-msgstr ""
+msgid "Get Started"
+msgstr "시작하기"

-#: src/components/settings/components/ai/llm-custom-view.tsx:569
-msgid "API Base URL"
-msgstr ""
+msgid "API Base URL"
+msgstr "API 기본 URL"

-#: src/components/settings/components/ai/llm-custom-view.tsx:277
-#: src/components/settings/components/ai/llm-custom-view.tsx:373
-#: src/components/settings/components/ai/llm-custom-view.tsx:479
-#: src/components/settings/components/ai/llm-custom-view.tsx:591
-msgid "API Key"
-msgstr ""
+msgid "API Key"
+msgstr "API 키"

-#: src/components/settings/components/ai/llm-custom-view.tsx:297
-#: src/components/settings/components/ai/llm-custom-view.tsx:393
-#: src/components/settings/components/ai/llm-custom-view.tsx:499
-msgid "Model"
-msgstr ""
+msgid "Model"
+msgstr "모델"

-#: src/components/settings/components/ai/llm-custom-view.tsx:616
-msgid "Model Name"
-msgstr ""
+msgid "Model Name"
+msgstr "모델 이름"

-#: src/components/settings/views/ai-llm.tsx:642
-msgid "Autonomy Selector"
-msgstr ""
+msgid "Autonomy Selector"
+msgstr "자율성 설정"

-#: src/components/settings/views/ai-llm.tsx:661
-msgid "Control how autonomous the AI enhancement should be"
-msgstr ""
+msgid "Control how autonomous the AI enhancement should be"
+msgstr "AI 향상의 자율성 수준을 조절합니다"

-#: src/components/settings/views/ai-llm.tsx:656
-msgid "Learn more about AI autonomy"
-msgstr ""
+msgid "Learn more about AI autonomy"
+msgstr "AI 자율성에 대해 자세히 알아보기"

-#: src/components/settings/components/ai/llm-custom-view.tsx:628
-msgid "Loading available models..."
-msgstr ""
+msgid "Loading available models..."
+msgstr "사용 가능한 모델 불러오는 중..."

-#: src/components/welcome-modal/custom-endpoint-view.tsx:550
-msgid "Loading models..."
-msgstr ""
+msgid "Loading models..."
+msgstr "모델 불러오는 중..."

-#: src/components/settings/views/templates.tsx:240
-msgid "Loading templates..."
-msgstr ""
+msgid "Loading templates..."
+msgstr "템플릿 불러오는 중..."

-#: src/components/settings/components/ai/llm-custom-view.tsx:619
-msgid "Select a model from the dropdown (if available) or manually enter the model name required by your endpoint."
-msgstr ""
+msgid "Select a model from the dropdown (if available) or manually enter the model name required by your endpoint."
+msgstr "드롭다운에서 모델을 선택하거나(가능한 경우) 엔드포인트에서 요구하는 모델 이름을 직접 입력하세요."

Also applies to: 825-828, 893-897, 1136-1139, 1413-1416, 1542-1545, 1486-1489, 954-957, 1385-1388, 994-998, 1252-1256, 446-450, 610-612, 833-836, 400-404, 410-416, 1033-1041, 1042-1046, 442-445, 617-620, 942-945, 974-977, 982-985, 986-989, 1333-1336

🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 472-475, the msgid
"Calendar" has an empty msgstr causing English fallback; add the appropriate
Korean translation (e.g., "캘린더") as msgstr. Repeat this for the other referenced
ranges (825-828, 893-897, 1136-1139, 1413-1416, 1542-1545, 1486-1489, 954-957,
1385-1388, 994-998, 1252-1256, 446-450, 610-612, 833-836, 400-404, 410-416,
1033-1041, 1042-1046, 442-445, 617-620, 942-945, 974-977, 982-985, 986-989,
1333-1336) by filling each empty msgstr with the minimal high-impact Korean
translations for the corresponding msgids used in the AI settings UI to prevent
English fallback.

Comment on lines +47 to 53
pub fn model_checksum(&self) -> u64 {
    match self {
        SupportedModel::Llama3p2_3bQ4 => 2831308098,
        SupportedModel::HyprLLM => 4037351144,
        SupportedModel::Gemma3_4bQ4 => 2760830291,
    }
}

💡 Verification agent

🧩 Analysis chain

Checksum: document algorithm and verify it’s used; otherwise remove.

State the algorithm/endian (e.g., xxh3_64, crc32, sha256-truncated) and ensure download verification consumes it; if unused, it violates “no unused functions.”

Doc comment suggestion:

 impl SupportedModel {
-    pub fn model_checksum(&self) -> u64 {
+    /// Why: stable integrity check for downloaded model binaries; must match the downloader/verifier.
+    pub fn model_checksum(&self) -> u64 {
         match self {

Search for usage:

If no call sites are found in downloader/verification paths, either wire it in or drop it.


🏁 Script executed:

#!/bin/bash
set -euo pipefail
rg -n "model_checksum" -A 3 -B 3

Length of output: 478


Action Required: Integrate or remove unused model_checksum

The model_checksum method in plugins/local-llm/src/model.rs (lines 47–53) isn’t referenced anywhere in the codebase and thus is dead code. You should either:

  • Wire it into your download/verification logic (e.g., in your downloader module or wherever you validate model binaries) so the checksum actually enforces integrity, or
  • Remove the method entirely to eliminate unused code.

Suggested doc-comment if you choose to keep and integrate it:

 impl SupportedModel {
+    /// Returns the 64-bit xxh3 checksum (little-endian) for verifying downloaded model binaries.
     pub fn model_checksum(&self) -> u64 {
         match self {
             SupportedModel::Llama3p2_3bQ4 => 2831308098,
             SupportedModel::HyprLLM      => 4037351144,
             SupportedModel::Gemma3_4bQ4  => 2760830291,
         }
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub fn model_checksum(&self) -> u64 {
    match self {
        SupportedModel::Llama3p2_3bQ4 => 2831308098,
        SupportedModel::HyprLLM => 4037351144,
        SupportedModel::Gemma3_4bQ4 => 2760830291,
    }
}
impl SupportedModel {
    /// Returns the 64-bit xxh3 checksum (little-endian) for verifying downloaded model binaries.
    pub fn model_checksum(&self) -> u64 {
        match self {
            SupportedModel::Llama3p2_3bQ4 => 2831308098,
            SupportedModel::HyprLLM => 4037351144,
            SupportedModel::Gemma3_4bQ4 => 2760830291,
        }
    }
}
🤖 Prompt for AI Agents
In plugins/local-llm/src/model.rs around lines 47 to 53, the model_checksum
method is unused dead code; either remove it or wire it into the model
download/verification flow. If removing: delete the method and any references to
it (none currently) and run a build. If keeping: update the downloader/installer
to call SupportedModel::model_checksum() after download, compute the downloaded
file's checksum (same u64 algorithm/endianness as these constants), compare and
fail the install if mismatched, and add a short doc-comment above model_checksum
explaining its purpose.
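If the method is kept and wired in, the verification step could look like the sketch below. compute_checksum here is a placeholder (FNV-1a) — the review does not state which u64 algorithm produced the stored constants, so a real integration must substitute that exact algorithm and endianness:

```rust
// Placeholder hash (FNV-1a); NOT the production algorithm for these constants.
fn compute_checksum(bytes: &[u8]) -> u64 {
    bytes.iter().fold(0xcbf29ce484222325u64, |h, b| {
        (h ^ *b as u64).wrapping_mul(0x100000001b3)
    })
}

// Fail the install on mismatch so corruption never reaches the model loader.
fn verify_download(bytes: &[u8], expected: u64) -> Result<(), String> {
    let actual = compute_checksum(bytes);
    if actual == expected {
        Ok(())
    } else {
        Err(format!("checksum mismatch: expected {expected}, got {actual}"))
    }
}

fn main() {
    let payload = b"model-bytes";
    let expected = compute_checksum(payload);
    assert!(verify_download(payload, expected).is_ok());
    assert!(verify_download(b"corrupted", expected).is_err());
}
```

The downloader would call verify_download with SupportedModel::model_checksum() as the expected value immediately after the file is written.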

@yujonglee yujonglee merged commit 2c14273 into main Aug 11, 2025
5 of 6 checks passed
@yujonglee yujonglee deleted the new-ai-settings branch August 11, 2025 05:58
This was referenced Aug 11, 2025
@coderabbitai coderabbitai bot mentioned this pull request Sep 7, 2025
@coderabbitai coderabbitai bot mentioned this pull request Oct 19, 2025
@coderabbitai coderabbitai bot mentioned this pull request Nov 11, 2025