
Add hypr-llm as downloadable option #1108

Merged
yujonglee merged 25 commits into main from hypr-llm-beta
Jul 14, 2025
Conversation

@yujonglee
Contributor

No description provided.

@coderabbitai
Contributor

coderabbitai bot commented Jul 8, 2025

📝 Walkthrough

This update introduces explicit per-model management for local LLM models, including new commands and permissions to get and set the current model. The backend and frontend now require specifying a model for download and status checks. UI components for displaying ratings and language support were refactored and centralized. Permissions, schemas, and documentation were updated accordingly.

Changes

| File(s) | Change Summary |
| --- | --- |
| apps/desktop/src/components/settings/components/ai/shared.tsx | Added RatingDisplay and LanguageDisplay React components for reusable UI. |
| apps/desktop/src/components/settings/components/ai/stt-view.tsx | Removed local RatingDisplay/LanguageDisplay; now imports from the shared module. |
| apps/desktop/src/components/toast/model-download.tsx | Uses new currentLlmModel query; passes model to LLM download/status functions. |
| apps/desktop/src/components/toast/shared.tsx | showLlmModelDownloadToast now accepts a model parameter and has improved callback handling. |
| plugins/local-llm/build.rs | Added "get_current_model" and "set_current_model" to the command list. |
| plugins/local-llm/js/bindings.gen.ts | API methods now require a model argument; added getCurrentModel and setCurrentModel methods. |
| plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml | Added permissions for the new get-current-model command. |
| plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml | Added permissions for the new set-current-model command. |
| plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml | Removed permission file for a deprecated command. |
| plugins/local-llm/permissions/autogenerated/reference.md | Documented new permissions for get/set current model; updated/renamed others. |
| plugins/local-llm/permissions/default.toml | Added default allow permissions for get/set current model. |
| plugins/local-llm/permissions/schemas/schema.json | Schema updated for new get/set current model permissions; removed old ones. |
| plugins/local-llm/src/commands.rs | Commands now take a model argument; added get_current_model/set_current_model commands. |
| plugins/local-llm/src/ext.rs | Refactored trait to use explicit model arguments for all relevant methods; per-model download tracking. |
| plugins/local-llm/src/lib.rs | Removed local module; added manager/model; changed State struct for per-model download tasks. |
| plugins/local-llm/src/local/mod.rs | Removed module, replaced by direct manager/model modules. |
| plugins/local-llm/src/model.rs | Added HyprLLM to SUPPORTED_MODELS; updated SupportedModel derives. |
| plugins/local-llm/src/server.rs | Changed import path for ModelManager. |
| apps/desktop/src-tauri/src/ext.rs | Updated setup_local_ai to use the current model for the download check. |
| apps/desktop/src/components/settings/components/ai/llm-view.tsx | Removed LLMView component and related types. |
| apps/desktop/src/components/settings/components/ai/index.ts | Removed centralized re-export file for AI components. |
| apps/desktop/src/components/settings/views/ai.tsx | Refactored LocalAI component with new UI for STT and LLM models, enhanced state and download management. |
| apps/desktop/src/components/settings/components/index.ts | Added export for wer-modal. |
| apps/desktop/src/components/welcome-modal/model-selection-view.tsx | Added React Query client usage for model download toast calls. |

Sequence Diagram(s)

LLM Model Download Flow (New/Updated)

sequenceDiagram
    participant UI
    participant ReactQuery
    participant LLMPluginJS
    participant TauriBackend
    participant LocalLlmState

    UI->>ReactQuery: use currentLlmModel()
    ReactQuery->>LLMPluginJS: getCurrentModel()
    LLMPluginJS->>TauriBackend: invoke('get_current_model')
    TauriBackend->>LocalLlmState: get_current_model()
    LocalLlmState-->>TauriBackend: current model
    TauriBackend-->>LLMPluginJS: current model
    LLMPluginJS-->>ReactQuery: current model
    ReactQuery-->>UI: current model

    UI->>LLMPluginJS: isModelDownloaded(model)
    LLMPluginJS->>TauriBackend: invoke('is_model_downloaded', model)
    TauriBackend->>LocalLlmState: is_model_downloaded(model)
    LocalLlmState-->>TauriBackend: status
    TauriBackend-->>LLMPluginJS: status
    LLMPluginJS-->>UI: status

    alt Model not downloaded
        UI->>LLMPluginJS: downloadModel(model, channel)
        LLMPluginJS->>TauriBackend: invoke('download_model', model, channel)
        TauriBackend->>LocalLlmState: download_model(model, channel)
        LocalLlmState-->>TauriBackend: download started
        TauriBackend-->>LLMPluginJS: download started
        LLMPluginJS-->>UI: download started
    end

Get/Set Current Model

sequenceDiagram
    participant UI
    participant LLMPluginJS
    participant TauriBackend
    participant LocalLlmState

    UI->>LLMPluginJS: getCurrentModel()
    LLMPluginJS->>TauriBackend: invoke('get_current_model')
    TauriBackend->>LocalLlmState: get_current_model()
    LocalLlmState-->>TauriBackend: current model
    TauriBackend-->>LLMPluginJS: current model
    LLMPluginJS-->>UI: current model

    UI->>LLMPluginJS: setCurrentModel(model)
    LLMPluginJS->>TauriBackend: invoke('set_current_model', model)
    TauriBackend->>LocalLlmState: set_current_model(model)
    LocalLlmState-->>TauriBackend: ok
    TauriBackend-->>LLMPluginJS: ok
    LLMPluginJS-->>UI: ok


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Clippy (1.86.0)
Updating git repository `https://github.com/RustAudio/cpal`

error: failed to load source for dependency cpal

Caused by:
Unable to update https://github.com/RustAudio/cpal?rev=51c3b43#51c3b43c

Caused by:
failed to create directory /usr/local/git/db/cpal-476cd1dd23dbc279

Caused by:
Permission denied (os error 13)


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (4)
apps/desktop/src/components/settings/components/ai/stt-view.tsx (2)

130-137: Use a controlled RadioGroup: defaultValue will not update after the query resolves

defaultValue is only read on the first render. When currentSTTModel finishes loading (or changes after a mutation) the selected radio will not update, so the UI can drift out of sync with the actual model in use.

-      defaultValue={currentSTTModel.data}
-      onValueChange={(value) => {
-        setCurrentSTTModel.mutate(value as SupportedModel);
-      }}
+      value={currentSTTModel.data ?? ""}
+      onValueChange={(value) => {
+        setCurrentSTTModel.mutate(value as SupportedModel);
+      }}

210-233: Inline try/catch violates the project guideline “No error handling”

Lines 216-233 wrap the download action in a try … catch. The coding-guidelines section for *.{js,ts,tsx} explicitly says “No error handling.”
Please remove the block (or move the handling to a dedicated error boundary / toast util that lives outside the component).

-                          try {
-                            showSttModelDownloadToast(model.model, () => {
-
-                            });
-                          } catch (error) {
-                            console.error(`Error initiating STT model download for ${model.model}:`, error);
-                            setDownloadingModelName(null);
-                          }
+                          showSttModelDownloadToast(model.model, () => {
+
+                          });
apps/desktop/src/components/toast/model-download.tsx (2)

20-36: Fix query enablement logic and potential runtime errors.

The checkForModelDownload query has an inconsistent enablement condition - it's enabled only when currentSttModel.data exists, but it depends on both currentSttModel.data and currentLlmModel.data. This could cause runtime errors with the non-null assertions.

Apply this diff to fix the enablement logic:

  const checkForModelDownload = useQuery({
-    enabled: !!currentSttModel.data,
+    enabled: !!currentSttModel.data && !!currentLlmModel.data,
    queryKey: ["check-model-downloaded"],
    queryFn: async () => {
      const [stt, llm] = await Promise.all([
        localSttCommands.isModelDownloaded(currentSttModel.data!),
        localLlmCommands.isModelDownloaded(currentLlmModel.data!),
      ]);

47-54: Fix potential runtime error with non-null assertion.

The query depends on currentLlmModel.data but doesn't check if it exists before using the non-null assertion.

Apply this diff to fix the enablement condition:

  const llmModelDownloading = useQuery({
-    enabled: !checkForModelDownload.data?.llmModelDownloaded,
+    enabled: !checkForModelDownload.data?.llmModelDownloaded && !!currentLlmModel.data,
    queryKey: ["llm-model-downloading"],
    queryFn: async () => {
      return localLlmCommands.isModelDownloading(currentLlmModel.data!);
    },
🧹 Nitpick comments (1)
apps/desktop/src/components/toast/shared.tsx (1)

84-87: LGTM! Backward-compatible function signature enhancement.

The optional model parameter maintains backward compatibility while enabling model-specific downloads. The default model selection is functional.

Consider making the default model selection more maintainable:

export function showLlmModelDownloadToast(model?: SupportedModelLLM, onComplete?: () => void) {
  const llmChannel = new Channel();
-  const modelToDownload = model || "Llama3p2_3bQ4";
+  const modelToDownload = model || "Llama3p2_3bQ4"; // Consider making this configurable
  localLlmCommands.downloadModel(modelToDownload, llmChannel);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f1fdb39 and 01ea3b3.

📒 Files selected for processing (18)
  • apps/desktop/src/components/settings/components/ai/shared.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (1 hunks)
  • apps/desktop/src/components/toast/model-download.tsx (2 hunks)
  • apps/desktop/src/components/toast/shared.tsx (3 hunks)
  • plugins/local-llm/build.rs (1 hunks)
  • plugins/local-llm/js/bindings.gen.ts (2 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml (0 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
  • plugins/local-llm/permissions/default.toml (1 hunks)
  • plugins/local-llm/permissions/schemas/schema.json (3 hunks)
  • plugins/local-llm/src/commands.rs (3 hunks)
  • plugins/local-llm/src/ext.rs (7 hunks)
  • plugins/local-llm/src/lib.rs (4 hunks)
  • plugins/local-llm/src/local/mod.rs (0 hunks)
  • plugins/local-llm/src/model.rs (1 hunks)
  • plugins/local-llm/src/server.rs (1 hunks)
💤 Files with no reviewable changes (2)
  • plugins/local-llm/src/local/mod.rs
  • plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{js,ts,tsx,rs}`: 1. No error handling. 2. No unused imports, variables, or functions. 3. For comments, keep it minimal. It should be about "Why", not "What".


⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • plugins/local-llm/src/server.rs
  • plugins/local-llm/build.rs
  • apps/desktop/src/components/toast/model-download.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
  • plugins/local-llm/src/model.rs
  • apps/desktop/src/components/toast/shared.tsx
  • apps/desktop/src/components/settings/components/ai/shared.tsx
  • plugins/local-llm/js/bindings.gen.ts
  • plugins/local-llm/src/lib.rs
  • plugins/local-llm/src/commands.rs
  • plugins/local-llm/src/ext.rs
🧬 Code Graph Analysis (4)
plugins/local-llm/src/model.rs (1)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (55-55)
plugins/local-llm/js/bindings.gen.ts (1)
plugins/local-stt/js/bindings.gen.ts (1)
  • SupportedModel (69-69)
plugins/local-llm/src/commands.rs (2)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (55-55)
plugins/local-llm/src/ext.rs (8)
  • is_model_downloading (23-23)
  • is_model_downloading (47-54)
  • download_model (18-22)
  • download_model (83-122)
  • get_current_model (15-15)
  • get_current_model (162-166)
  • set_current_model (16-16)
  • set_current_model (169-173)
plugins/local-llm/src/ext.rs (5)
plugins/local-llm/src/commands.rs (7)
  • start_server (57-59)
  • stop_server (63-65)
  • get_current_model (76-80)
  • set_current_model (84-89)
  • download_model (45-53)
  • is_model_downloading (36-41)
  • is_model_downloaded (25-32)
plugins/local-stt/src/ext.rs (20)
  • start_server (17-17)
  • start_server (93-119)
  • stop_server (18-18)
  • stop_server (122-130)
  • get_current_model (19-19)
  • get_current_model (251-255)
  • set_current_model (20-20)
  • set_current_model (258-262)
  • download_model (29-33)
  • download_model (133-172)
  • is_model_downloading (35-35)
  • is_model_downloading (241-248)
  • is_model_downloaded (36-39)
  • is_model_downloaded (64-82)
  • state (57-57)
  • state (86-86)
  • state (112-112)
  • state (123-123)
  • state (162-162)
  • state (242-242)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (55-55)
crates/file/src/lib.rs (2)
  • file_size (54-57)
  • download_file_with_callback (22-52)
plugins/local-llm/src/manager.rs (1)
  • new (23-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: ci (windows, windows-latest)
  • GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (30)
plugins/local-llm/src/server.rs (1)

22-22: LGTM - Clean import path update

The import path change correctly reflects the module restructuring where ModelManager was moved from the local module to the crate root.

plugins/local-llm/build.rs (1)

9-10: LGTM - New commands added correctly

The addition of "get_current_model" and "set_current_model" commands is consistent with the plugin's build configuration pattern and aligns with the new model management functionality.

plugins/local-llm/permissions/default.toml (1)

12-13: LGTM - Permissions added correctly

The new permissions "allow-get-current-model" and "allow-set-current-model" follow the correct naming convention and are properly integrated into the default permissions list.

plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml (1)

1-14: LGTM - Well-structured permission file

The autogenerated permission file follows the correct structure with proper allow/deny entries, appropriate descriptions, and correct schema reference.

plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml (1)

1-14: LGTM - Well-structured permission file

The autogenerated permission file follows the correct structure with proper allow/deny entries, appropriate descriptions, and correct schema reference.

apps/desktop/src/components/toast/model-download.tsx (1)

15-18: LGTM! Consistent pattern with existing STT model query.

The new currentLlmModel query follows the same pattern as currentSttModel and properly integrates with the model-specific architecture.

plugins/local-llm/src/model.rs (3)

1-2: LGTM! Proper expansion of supported models.

The SUPPORTED_MODELS array correctly includes both models and follows the established pattern.


4-8: LGTM! Valuable trait additions for the enum.

The additional derive traits (Debug, Eq, Hash, PartialEq) are valuable additions that enable:

  • Debug for better debugging experience
  • Eq, Hash, PartialEq for using the enum as HashMap keys (needed for per-model download tracking)

14-29: LGTM! Complete implementation for HyprLLM model.

The new model variant is properly implemented across all methods with appropriate metadata:

  • Distinct file name and URL
  • Correct model size for the quantized model
plugins/local-llm/permissions/schemas/schema.json (3)

309-320: LGTM! Proper permission definitions for get_current_model.

The new permissions follow the established pattern with both allow and deny variants, proper descriptions, and consistent JSON structure.


381-392: LGTM! Proper permission definitions for set_current_model.

The new permissions are consistent with the existing permission structure and provide appropriate access control for model management.


418-421: LGTM! Updated default permissions include new commands.

The default permission description correctly includes the new allow-get-current-model and allow-set-current-model permissions.

apps/desktop/src/components/toast/shared.tsx (3)

4-4: LGTM! Proper import aliasing to avoid conflicts.

The import alias SupportedModelLLM prevents naming conflicts with the STT model types.


89-89: LGTM! Unique toast IDs per model.

Including the model name in the toast ID ensures unique toasts for different model downloads.


103-105: LGTM! Improved callback handling.

The conditional invocation of the onComplete callback is properly implemented and maintains the existing behavior.

plugins/local-llm/permissions/autogenerated/reference.md (4)

15-16: LGTM! Default permissions properly updated.

The default permission set correctly includes the new allow-get-current-model and allow-set-current-model permissions.


56-77: LGTM! Complete documentation for get_current_model permissions.

The permission table entries follow the established format and provide clear descriptions for both allow and deny variants.


186-207: LGTM! Updated models_dir permission documentation.

The permission descriptions are consistent with the command changes and maintain the same documentation format.


212-233: LGTM! Complete documentation for set_current_model permissions.

The permission table entries are properly documented with clear descriptions for both allow and deny variants.

plugins/local-llm/src/lib.rs (2)

10-11: Clean module reorganization!

The refactoring from a single local module to separate manager and model modules improves code organization and separation of concerns.

Also applies to: 17-19


28-32: Excellent refactoring for multi-model support!

The change to use HashMap<SupportedModel, JoinHandle> enables proper concurrent download management for multiple models, and the Default trait implementation simplifies state initialization.

plugins/local-llm/src/commands.rs (1)

25-32: Proper implementation of model-specific operations!

The functions correctly accept and forward the model parameter to their respective trait methods, with consistent error handling using map_err.

Also applies to: 36-41, 45-53

plugins/local-llm/src/ext.rs (3)

36-36: Good catch on the directory name!

Changed from what appears to be a test placeholder "ttt" to the proper "llm" directory name.


83-122: Excellent concurrent download management!

The implementation properly handles per-model download tasks with appropriate lifecycle management - aborting existing tasks before starting new ones and tracking them in the HashMap.


125-131: Good safety check before starting server!

Verifying that the current model is downloaded before attempting to start the server prevents runtime errors and provides clear feedback to users.

plugins/local-llm/js/bindings.gen.ts (5)

19-21: LGTM! Model parameter addition aligns with explicit per-model architecture.

The addition of the SupportedModel parameter to isModelDownloaded follows the architectural shift toward explicit model management, eliminating the need for implicit current model handling.


22-24: LGTM! Model parameter addition enables model-specific download status checks.

The addition of the SupportedModel parameter to isModelDownloading allows checking download status for specific models, which is essential for the new per-model management system.


25-27: LGTM! Model parameter addition maintains progress reporting functionality.

The addition of the SupportedModel parameter to downloadModel while retaining the channel parameter ensures both explicit model specification and progress reporting capabilities are maintained.


37-42: LGTM! New model management functions provide essential functionality.

The addition of getCurrentModel and setCurrentModel functions provides the necessary interface for querying and setting the current model, which is crucial for the new explicit per-model architecture.
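The backend state these bindings reach can be sketched as follows. Everything here is a hedged approximation: the struct and field names are hypothetical, and the default fallback mirrors the Llama3p2_3bQ4 fallback noted in the desktop ext.rs review rather than confirmed plugin internals.

```rust
use std::sync::Mutex;

#[derive(Debug, Clone, PartialEq)]
enum SupportedModel { Llama3p2_3bQ4, HyprLLM } // assumed variant names

// Hypothetical stand-in for the plugin state behind get/set_current_model.
struct LocalLlmState {
    current: Mutex<Option<SupportedModel>>,
}

impl LocalLlmState {
    fn get_current_model(&self) -> SupportedModel {
        // Fall back to a default when no model has been set yet.
        self.current
            .lock()
            .unwrap()
            .clone()
            .unwrap_or(SupportedModel::Llama3p2_3bQ4)
    }

    fn set_current_model(&self, model: SupportedModel) {
        *self.current.lock().unwrap() = Some(model);
    }
}

fn main() {
    let state = LocalLlmState { current: Mutex::new(None) };
    assert_eq!(state.get_current_model(), SupportedModel::Llama3p2_3bQ4); // default
    state.set_current_model(SupportedModel::HyprLLM);
    assert_eq!(state.get_current_model(), SupportedModel::HyprLLM);
}
```

Holding the selection behind a Mutex keeps get/set safe to call from concurrent Tauri command invocations.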


55-55: No naming inconsistencies found

I confirmed that the SupportedModel type in plugins/local-llm/js/bindings.gen.ts uses the PascalCase value "HyprLLM" and that there are no backend references (Rust or JS/TS) using the slug "hypr-llm". The PR title’s slug-case naming is a separate, human-readable convention and does not need to match the TS type. No changes required.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
plugins/local-llm/permissions/schemas/schema.json (2)

309-320: Enum additions LGTM, but keep alphabetical ordering for maintainability

The new allow-/deny-get-current-model items follow the existing naming convention and schema structure—good work.
Minor nit: throughout this enum we keep the entries alphabetically sorted by the command after the first token (allow-, deny-). Placing these right after download-model breaks that implicit order. Moving them below get-status would make future look-ups and diff reviews easier.


381-392: Consistent naming & structure – consider the same ordering tweak

allow-/deny-set-current-model mirror the pattern correctly.
Same alphabetical-ordering remark as above: dropping them after the other set-*/start-* group would keep the enum tidy.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 01ea3b3 and 6f7d883.

📒 Files selected for processing (4)
  • apps/desktop/src-tauri/src/ext.rs (1 hunks)
  • plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
  • plugins/local-llm/permissions/default.toml (1 hunks)
  • plugins/local-llm/permissions/schemas/schema.json (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • plugins/local-llm/permissions/default.toml
  • plugins/local-llm/permissions/autogenerated/reference.md
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{js,ts,tsx,rs}`: 1. No error handling. 2. No unused imports, variables, or functions. 3. For comments, keep it minimal. It should be about "Why", not "What".


⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • apps/desktop/src-tauri/src/ext.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (4)
apps/desktop/src-tauri/src/ext.rs (3)

52-52: LGTM: Import statement updated correctly.

The import statement properly adds SupportedModel to align with the new model management approach.


54-56: LGTM: Current model retrieval follows established pattern.

The implementation correctly retrieves the current model with a sensible default fallback (SupportedModel::Llama3p2_3bQ4), matching the pattern used in the local-stt plugin above.


58-58: LGTM: Model parameter correctly passed to API.

The is_model_downloaded call now properly passes the current model parameter, aligning with the updated plugin API that requires explicit model specification.

plugins/local-llm/permissions/schemas/schema.json (1)

418-422: Default-set description: verify it matches the actual default array

The markdown bullet list now includes the two new permissions—great.
Double-check that the generated plugins/local-llm/permissions/autogenerated/reference.md and the default permission TOML still enumerate exactly this list (no more, no less). Mismatches silently break the permission gate at runtime.

yujonglee and others added 17 commits July 14, 2025 00:30
This commit introduces a new AI settings view that allows users to select and download
speech-to-text (STT) and large language models (LLM) for use in the application.

The key changes include:

- Added initial STT and LLM model data with details like name, accuracy, speed, size, and download status.
- Implemented handlers for downloading STT and LLM models, updating the UI accordingly.
- Integrated the new model selection and download functionality into the AI settings view.
- Introduced utility functions to display download progress toasts for STT and LLM models.

These changes provide users with the ability to customize the AI models used in the application,
improving the overall experience and flexibility.
This commit adds the `wer-modal` component to the `settings` module and updates the `index.ts` file to export it. The changes were made to centralize the management of all the settings-related components in a single location.

The `ai/index.ts` file has also been updated to remove the exports for `llm-view`, `stt-view`, and `wer-modal` components, as they are now being exported from the main `index.ts` file.

Additionally, the `model-download.tsx` file has been updated to provide more specific and informative messages to the user when they need to download the STT or LLM models for offline functionality.
The changes made in this commit focus on improving the user interface for selecting speech-to-text (STT) models in the settings section of the desktop application. The key changes are:

1. Reorganize the layout of the STT model options to be more compact and visually appealing.
2. Simplify the header section by removing the unnecessary icon and centering the "Transcribing" title.
3. Add a tooltip with an information icon to provide more context about the STT model selection.
4. Adjust the styling and hover behavior of the STT model options to make the selected model more visually distinct.
5. Optimize the layout to be more responsive and work well on different screen sizes.

These changes aim to enhance the user experience by making the STT model selection process more intuitive and visually appealing, while also providing additional context and information to the user.
This commit adds a log message to the `local-llm` plugin that
prints the name of the model being used for inference. This
provides more visibility into the model being used during
inference requests.

feat(llama): Implement Display trait for ModelName

This commit adds an implementation of the `Display` trait for
the `ModelName` enum in the `llama` crate. This allows the
model name to be easily printed as a string, which is used
in the `local-llm` plugin to log the model being used.
- Add DefaultModelMigrated to track user migration status
- Add LastMigrationVersion for future version-based migrations
- Replace llm.gguf with hypr-llm.gguf in test functions
- Ensures consistency with new default model filename
…tion

- Change default from Llama3p2_3bQ4 to HyprLLM for new users
- Preserve existing users' downloaded model to avoid disruption
- Track migration state to prevent repeated prompts
- Align frontend default with backend model transition
- Ensures consistent user experience for model downloads
This commit introduces several improvements to the AI settings UI and functionality:

- Removes the unused `MicIcon` component from the imports
- Enhances the visual styling of the STT and LLM model cards, including better hover and active states
- Simplifies the logic for displaying the accuracy and speed indicators, removing the unnecessary check for the `downloaded` property
- Improves the layout and responsiveness of the model download buttons, ensuring a consistent user experience

These changes aim to provide a more polished and intuitive interface for managing AI models within the application's settings.

- Add support for the `@hypr/plugin-local-llm` package to handle local LLM model management.
- Update the initial LLM models list to include new model options, such as Llama 3 (3B, Q4) and HyprLLM v1-v4.
- Implement a `modelDownloadStatus` query to check the download status of each LLM model and update the UI accordingly.
- Update the `handleLlmModelDownload` function to use the new `showLlmModelDownloadToast` function from the `@hypr/plugin-local-llm` package.
Improve the LLM model download experience by introducing a callback
function to the `showLlmModelDownloadToast` function. This callback is executed when the download actually completes, allowing us to update the UI and set the selected LLM model immediately after the download finishes, rather than assuming completion after the toast
is dismissed.

Additionally, we update the `downloadingModels` set to remove the
model key once the download is complete, providing a more accurate
representation of the download status.
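A minimal sketch of the callback wiring described above. The `onComplete` callback and the `downloadingModels` set mirror the names in the commit message; the toast rendering and the actual download transport are elided, so the `download` parameter here is a stand-in:

```typescript
// Hypothetical sketch: the real showLlmModelDownloadToast lives in
// apps/desktop/src/components/toast/shared.tsx and drives an actual toast.
const downloadingModels = new Set<string>();

function showLlmModelDownloadToast(
  model: string,
  onComplete?: () => void,
  // Stand-in for the real download; the default completes synchronously.
  download: (model: string, onDone: () => void) => void = (_m, onDone) => onDone(),
): void {
  downloadingModels.add(model);
  download(model, () => {
    // Runs when the download actually completes, not when the toast is
    // dismissed, so the UI can select the model immediately.
    downloadingModels.delete(model);
    onComplete?.();
  });
}
```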
@ComputelessComputer ComputelessComputer linked an issue Jul 14, 2025 that may be closed by this pull request
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (4)
plugins/local-llm/src/ext.rs (2)

83-89: Remove unnecessary clone operation

The clone on line 88 is unnecessary since the model can be moved directly into the async block.

 async fn download_model(
     &self,
     model: crate::SupportedModel,
     channel: Channel<i8>,
 ) -> Result<(), crate::Error> {
-    let m = model.clone();
-    let path = self.models_dir().join(m.file_name());
+    let path = self.models_dir().join(model.file_name());
+    let model_url = model.model_url();

Then update line 105:

-            if let Err(e) = download_file_with_callback(m.model_url(), path, callback).await {
+            if let Err(e) = download_file_with_callback(model_url, path, callback).await {

125-131: Improve error message with model details

The error message could be more descriptive by including which model is not downloaded.

 async fn start_server(&self) -> Result<String, crate::Error> {
     let current_model = self.get_current_model()?;

     if !self.is_model_downloaded(&current_model).await? {
-        return Err(crate::Error::ModelNotDownloaded);
+        return Err(crate::Error::ModelNotDownloaded(format!(
+            "Model '{}' is not downloaded", 
+            current_model.file_name()
+        )));
     }
apps/desktop/src/components/settings/views/ai.tsx (2)

171-173: Implement the TODO for showing file location

The TODO comment indicates missing functionality.

Would you like me to implement the functionality to open the models directory in the file explorer? This would involve using Tauri's shell API to open the folder.

const handleShowFileLocation = async (modelKey: string) => {
  const { open } = await import('@tauri-apps/api/shell');
  const { appDataDir } = await import('@tauri-apps/api/path');
  
  const modelsPath = await appDataDir();
  await open(`${modelsPath}/ttt`);
};

281-693: Consider splitting this large component into smaller, focused components

At 695 lines, this component is quite large and handles multiple responsibilities. Consider extracting sections into separate components for better maintainability.

Extract the following into separate components:

  1. TranscribingSection (lines 283-428)
  2. EnhancingSection (lines 430-540)
  3. CustomEndpointSection (lines 542-686)

Example structure:

// TranscribingSection.tsx
export function TranscribingSection({ 
  models, 
  selectedModel, 
  onModelSelect, 
  onDownload,
  downloadingModels 
}) {
  // STT model selection UI
}

// Then in the main component:
<TranscribingSection
  models={sttModels}
  selectedModel={selectedSTTModel}
  onModelSelect={setSelectedSTTModel}
  onDownload={handleModelDownload}
  downloadingModels={downloadingModels}
/>
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6f7d883 and 0198d62.

📒 Files selected for processing (30)
  • apps/desktop/src-tauri/src/ext.rs (1 hunks)
  • apps/desktop/src/components/settings/components/ai/index.ts (0 hunks)
  • apps/desktop/src/components/settings/components/ai/llm-view.tsx (0 hunks)
  • apps/desktop/src/components/settings/components/ai/shared.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/index.ts (1 hunks)
  • apps/desktop/src/components/settings/views/ai.tsx (5 hunks)
  • apps/desktop/src/components/toast/model-download.tsx (3 hunks)
  • apps/desktop/src/components/toast/shared.tsx (3 hunks)
  • apps/desktop/src/locales/en/messages.po (28 hunks)
  • apps/desktop/src/locales/ko/messages.po (28 hunks)
  • crates/file/src/lib.rs (2 hunks)
  • crates/gguf/src/lib.rs (1 hunks)
  • crates/llama/src/lib.rs (2 hunks)
  • crates/whisper-local/src/model.rs (1 hunks)
  • plugins/local-llm/build.rs (1 hunks)
  • plugins/local-llm/js/bindings.gen.ts (2 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml (0 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
  • plugins/local-llm/permissions/default.toml (1 hunks)
  • plugins/local-llm/permissions/schemas/schema.json (3 hunks)
  • plugins/local-llm/src/commands.rs (3 hunks)
  • plugins/local-llm/src/ext.rs (6 hunks)
  • plugins/local-llm/src/lib.rs (4 hunks)
  • plugins/local-llm/src/local/mod.rs (0 hunks)
  • plugins/local-llm/src/model.rs (1 hunks)
  • plugins/local-llm/src/server.rs (2 hunks)
  • plugins/local-llm/src/store.rs (1 hunks)
💤 Files with no reviewable changes (4)
  • plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml
  • apps/desktop/src/components/settings/components/ai/index.ts
  • plugins/local-llm/src/local/mod.rs
  • apps/desktop/src/components/settings/components/ai/llm-view.tsx
✅ Files skipped from review due to trivial changes (7)
  • crates/whisper-local/src/model.rs
  • crates/gguf/src/lib.rs
  • apps/desktop/src/components/settings/components/index.ts
  • plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml
  • plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml
  • crates/file/src/lib.rs
  • plugins/local-llm/src/store.rs
🚧 Files skipped from review as they are similar to previous changes (14)
  • plugins/local-llm/build.rs
  • apps/desktop/src-tauri/src/ext.rs
  • plugins/local-llm/permissions/default.toml
  • apps/desktop/src/components/settings/components/ai/shared.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
  • plugins/local-llm/permissions/schemas/schema.json
  • apps/desktop/src/components/toast/model-download.tsx
  • plugins/local-llm/src/server.rs
  • plugins/local-llm/src/model.rs
  • plugins/local-llm/src/lib.rs
  • apps/desktop/src/components/toast/shared.tsx
  • plugins/local-llm/permissions/autogenerated/reference.md
  • plugins/local-llm/src/commands.rs
  • plugins/local-llm/js/bindings.gen.ts
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

Sources: ⚙️ CodeRabbit Configuration File

🧬 Code Graph Analysis (1)
apps/desktop/src/components/settings/views/ai.tsx (9)
apps/desktop/src/components/toast/shared.tsx (2)
  • showSttModelDownloadToast (54-82)
  • showLlmModelDownloadToast (84-113)
plugins/local-llm/js/bindings.gen.ts (1)
  • SupportedModel (55-55)
plugins/local-stt/js/bindings.gen.ts (1)
  • SupportedModel (69-69)
packages/ui/src/components/ui/tooltip.tsx (3)
  • Tooltip (28-28)
  • TooltipTrigger (28-28)
  • TooltipContent (28-28)
packages/ui/src/components/ui/button.tsx (1)
  • Button (37-89)
packages/ui/src/lib/utils.ts (1)
  • cn (4-6)
packages/ui/src/components/ui/form.tsx (7)
  • Form (158-158)
  • FormField (158-158)
  • FormItem (158-158)
  • FormLabel (158-158)
  • FormDescription (158-158)
  • FormControl (158-158)
  • FormMessage (158-158)
packages/ui/src/components/ui/input.tsx (1)
  • Input (22-22)
packages/ui/src/components/ui/select.tsx (5)
  • Select (174-174)
  • SelectTrigger (174-174)
  • SelectValue (174-174)
  • SelectContent (174-174)
  • SelectItem (174-174)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: ci
  • GitHub Check: ci (macos, macos-latest)
  • GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (4)
crates/llama/src/lib.rs (2)

32-40: LGTM! Clean Display implementation.

The Display trait implementation for ModelName is straightforward and correctly handles all enum variants with appropriate formatting.


387-387: LGTM! Test path updated correctly.

The model filename change from "ttt/llm.gguf" to "ttt/hypr-llm.gguf" aligns with the broader changes across crates to support the new HyprLLM model.

apps/desktop/src/locales/ko/messages.po (1)

1-1258: LGTM!

The localization file correctly updates source references to match the refactored code structure. No translation changes required.

apps/desktop/src/locales/en/messages.po (1)

881-883: LGTM!

The localization file correctly adds the new "Open finder view" translation and updates source references to match the refactored code.

- Add optional QueryClient parameter to showSttModelDownloadToast and showLlmModelDownloadToast
- Add React Query cache invalidation when downloads start
- Invalidate stt-model-downloading, llm-model-downloading, and check-model-downloaded queries
- Prevents duplicate toast notifications by syncing cache with download state
- Add useQueryClient hook to LocalAI component
- Pass queryClient to showSttModelDownloadToast and showLlmModelDownloadToast calls
- Ensures proper cache invalidation when downloads are initiated from settings
- Add useQueryClient hook to ModelDownloadNotification component
- Pass queryClient to toast function calls when download button is clicked
- Prevents toast loop when downloads are started from notification
- Add useQueryClient hook to ModelSelectionView component
- Pass queryClient to toast function calls in handleContinue
- Ensures proper cache sync when downloads are initiated during onboarding
- Pass existing queryClient instance to showSttModelDownloadToast call
- Completes cache invalidation integration for STT model downloads
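The cache-sync step described in these commits, sketched against a minimal QueryClient-like interface. The real code passes @tanstack/react-query's `QueryClient`; the query keys below are the ones named in the commit messages, and the helper name is hypothetical:

```typescript
// Minimal stand-in for the part of react-query's QueryClient used here.
interface QueryClientLike {
  invalidateQueries(filter: { queryKey: unknown[] }): void;
}

function syncDownloadCaches(
  queryClient: QueryClientLike | undefined,
  model: string,
): void {
  // Invalidate the download-state queries when a download starts, so the
  // UI re-fetches instead of re-showing the toast from a stale cache.
  queryClient?.invalidateQueries({ queryKey: ["llm-model-downloading", model] });
  queryClient?.invalidateQueries({ queryKey: ["check-model-downloaded", model] });
}
```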
i18n
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (4)
apps/desktop/src/components/settings/views/ai.tsx (4)

50-121: Extract hardcoded model data to configuration

The hardcoded model data should be moved to a configuration file or fetched from an API for better maintainability.


132-141: Replace fragile string-based type detection with proper type checking

Using string prefix "Quantized" to determine model type is fragile and error-prone. The unsafe type cast on line 140 should also be avoided.


156-170: Fix unsafe type cast in handleLlmModelDownload

The type cast on line 159 is unsafe and could cause runtime errors.


205-218: Make model download status checks scalable

The hardcoded model keys in the download status check won't scale as more models are added.

🧹 Nitpick comments (1)
apps/desktop/src/components/settings/views/ai.tsx (1)

172-174: Implement missing file location functionality

The TODO comment indicates incomplete functionality for opening models in finder.

Complete the implementation for the file location functionality:

 const handleShowFileLocation = async (modelKey: string) => {
-  // TODO: Implement opening models in finder functionality
+  try {
+    // Implement platform-specific file location opening
+    // This might require a Tauri command or similar native integration
+    await commands.showModelInFinder(modelKey);
+  } catch (error) {
+    console.error('Failed to show file location:', error);
+  }
 };

Would you like me to help implement this functionality or create an issue to track this task?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f63fed4 and 58f4de5.

📒 Files selected for processing (19)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (2 hunks)
  • apps/desktop/src/components/settings/views/ai.tsx (5 hunks)
  • apps/desktop/src/components/toast/model-download.tsx (4 hunks)
  • apps/desktop/src/components/toast/shared.tsx (4 hunks)
  • apps/desktop/src/components/welcome-modal/model-selection-view.tsx (3 hunks)
  • apps/desktop/src/locales/en/messages.po (19 hunks)
  • apps/desktop/src/locales/ko/messages.po (19 hunks)
  • crates/llama/src/lib.rs (2 hunks)
  • plugins/local-llm/build.rs (1 hunks)
  • plugins/local-llm/permissions/autogenerated/commands/restart_server.toml (1 hunks)
  • plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
  • plugins/local-llm/permissions/default.toml (1 hunks)
  • plugins/local-llm/permissions/schemas/schema.json (3 hunks)
  • plugins/local-llm/src/server.rs (2 hunks)
  • plugins/local-stt/build.rs (1 hunks)
  • plugins/local-stt/permissions/autogenerated/commands/restart_server.toml (1 hunks)
  • plugins/local-stt/permissions/autogenerated/reference.md (2 hunks)
  • plugins/local-stt/permissions/default.toml (1 hunks)
  • plugins/local-stt/permissions/schemas/schema.json (2 hunks)
✅ Files skipped from review due to trivial changes (4)
  • plugins/local-llm/permissions/autogenerated/commands/restart_server.toml
  • plugins/local-stt/permissions/autogenerated/reference.md
  • plugins/local-stt/permissions/autogenerated/commands/restart_server.toml
  • plugins/local-stt/permissions/schemas/schema.json
🚧 Files skipped from review as they are similar to previous changes (12)
  • plugins/local-llm/src/server.rs
  • plugins/local-llm/build.rs
  • crates/llama/src/lib.rs
  • plugins/local-llm/permissions/default.toml
  • apps/desktop/src/components/welcome-modal/model-selection-view.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
  • apps/desktop/src/locales/en/messages.po
  • apps/desktop/src/components/toast/model-download.tsx
  • apps/desktop/src/locales/ko/messages.po
  • plugins/local-llm/permissions/schemas/schema.json
  • apps/desktop/src/components/toast/shared.tsx
  • plugins/local-llm/permissions/autogenerated/reference.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

Sources: ⚙️ CodeRabbit Configuration File

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: ci (windows, windows-latest)
  • GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (5)
plugins/local-stt/build.rs (1)

10-10: LGTM!

The addition of "restart_server" to the COMMANDS array is correctly implemented and follows the existing pattern.

plugins/local-stt/permissions/default.toml (1)

11-11: LGTM!

The addition of "allow-restart-server" permission is correctly implemented and aligns with the "restart_server" command added in build.rs.

apps/desktop/src/components/settings/views/ai.tsx (3)

283-429: STT models UI section looks well-structured

The UI implementation for STT models follows consistent patterns with proper state management and accessibility considerations.


431-540: LLM models UI section maintains good consistency

The LLM models UI follows the same patterns as the STT section with appropriate conditional rendering and state management.


543-687: Custom endpoint form implementation is robust

The form handling includes proper validation, conditional rendering, and state synchronization. The integration with React Hook Form and the custom LLM settings is well-implemented.

@yujonglee yujonglee merged commit e8bfe94 into main Jul 14, 2025
8 checks passed
@yujonglee yujonglee deleted the hypr-llm-beta branch July 14, 2025 18:34
@coderabbitai coderabbitai bot mentioned this pull request Aug 11, 2025
@coderabbitai coderabbitai bot mentioned this pull request Nov 11, 2025
@coderabbitai coderabbitai bot mentioned this pull request Dec 5, 2025
4 tasks
Development

Successfully merging this pull request may close these issues.

- Download model toast pops up after clicking on download
- Revamp AI tab

2 participants