Conversation
📝 Walkthrough

This update introduces explicit per-model management for local LLM models, including new commands and permissions to get and set the current model. The backend and frontend now require specifying a model for download and status checks. UI components for displaying ratings and language support were refactored and centralized. Permissions, schemas, and documentation were updated accordingly.
Sequence Diagram(s)

LLM Model Download Flow (New/Updated)

```mermaid
sequenceDiagram
    participant UI
    participant ReactQuery
    participant LLMPluginJS
    participant TauriBackend
    participant LocalLlmState
    UI->>ReactQuery: use currentLlmModel()
    ReactQuery->>LLMPluginJS: getCurrentModel()
    LLMPluginJS->>TauriBackend: invoke('get_current_model')
    TauriBackend->>LocalLlmState: get_current_model()
    LocalLlmState-->>TauriBackend: current model
    TauriBackend-->>LLMPluginJS: current model
    LLMPluginJS-->>ReactQuery: current model
    ReactQuery-->>UI: current model
    UI->>LLMPluginJS: isModelDownloaded(model)
    LLMPluginJS->>TauriBackend: invoke('is_model_downloaded', model)
    TauriBackend->>LocalLlmState: is_model_downloaded(model)
    LocalLlmState-->>TauriBackend: status
    TauriBackend-->>LLMPluginJS: status
    LLMPluginJS-->>UI: status
    alt Model not downloaded
        UI->>LLMPluginJS: downloadModel(model, channel)
        LLMPluginJS->>TauriBackend: invoke('download_model', model, channel)
        TauriBackend->>LocalLlmState: download_model(model, channel)
        LocalLlmState-->>TauriBackend: download started
        TauriBackend-->>LLMPluginJS: download started
        LLMPluginJS-->>UI: download started
    end
```
Get/Set Current Model

```mermaid
sequenceDiagram
    participant UI
    participant LLMPluginJS
    participant TauriBackend
    participant LocalLlmState
    UI->>LLMPluginJS: getCurrentModel()
    LLMPluginJS->>TauriBackend: invoke('get_current_model')
    TauriBackend->>LocalLlmState: get_current_model()
    LocalLlmState-->>TauriBackend: current model
    TauriBackend-->>LLMPluginJS: current model
    LLMPluginJS-->>UI: current model
    UI->>LLMPluginJS: setCurrentModel(model)
    LLMPluginJS->>TauriBackend: invoke('set_current_model', model)
    TauriBackend->>LocalLlmState: set_current_model(model)
    LocalLlmState-->>TauriBackend: ok
    TauriBackend-->>LLMPluginJS: ok
    LLMPluginJS-->>UI: ok
```
Possibly related PRs
Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Clippy (1.86.0) error: failed to load source for dependency
Actionable comments posted: 1
🔭 Outside diff range comments (4)
apps/desktop/src/components/settings/components/ai/stt-view.tsx (2)
130-137: Use a controlled `RadioGroup` - `defaultValue` will not update after the query resolves

`defaultValue` is only read on the first render. When `currentSTTModel` finishes loading (or changes after a mutation) the selected radio will not update, so the UI can drift out of sync with the actual model in use.

```diff
- defaultValue={currentSTTModel.data}
- onValueChange={(value) => {
-   setCurrentSTTModel.mutate(value as SupportedModel);
- }}
+ value={currentSTTModel.data ?? ""}
+ onValueChange={(value) => {
+   setCurrentSTTModel.mutate(value as SupportedModel);
+ }}
```
210-233: Inline `try/catch` violates the project guideline "No error handling"

Lines 216-233 wrap the download action in a `try … catch`. The coding-guidelines section for `*.{js,ts,tsx}` explicitly says "No error handling."

Please remove the block (or move the handling to a dedicated error boundary / toast util that lives outside the component).

```diff
- try {
-   showSttModelDownloadToast(model.model, () => {
-     …
-   });
- } catch (error) {
-   console.error(`Error initiating STT model download for ${model.model}:`, error);
-   setDownloadingModelName(null);
- }
+ showSttModelDownloadToast(model.model, () => {
+   …
+ });
```

apps/desktop/src/components/toast/model-download.tsx (2)
20-36: Fix query enablement logic and potential runtime errors.

The `checkForModelDownload` query has an inconsistent enablement condition: it's enabled only when `currentSttModel.data` exists, but it depends on both `currentSttModel.data` and `currentLlmModel.data`. This could cause runtime errors with the non-null assertions.

Apply this diff to fix the enablement logic:

```diff
  const checkForModelDownload = useQuery({
-   enabled: !!currentSttModel.data,
+   enabled: !!currentSttModel.data && !!currentLlmModel.data,
    queryKey: ["check-model-downloaded"],
    queryFn: async () => {
      const [stt, llm] = await Promise.all([
        localSttCommands.isModelDownloaded(currentSttModel.data!),
        localLlmCommands.isModelDownloaded(currentLlmModel.data!),
      ]);
```
47-54: Fix potential runtime error with non-null assertion.

The query depends on `currentLlmModel.data` but doesn't check if it exists before using the non-null assertion.

Apply this diff to fix the enablement condition:

```diff
  const llmModelDownloading = useQuery({
-   enabled: !checkForModelDownload.data?.llmModelDownloaded,
+   enabled: !checkForModelDownload.data?.llmModelDownloaded && !!currentLlmModel.data,
    queryKey: ["llm-model-downloading"],
    queryFn: async () => {
      return localLlmCommands.isModelDownloading(currentLlmModel.data!);
    },
```
🧹 Nitpick comments (1)
apps/desktop/src/components/toast/shared.tsx (1)
84-87: LGTM! Backward-compatible function signature enhancement.

The optional `model` parameter maintains backward compatibility while enabling model-specific downloads. The default model selection is functional.

Consider making the default model selection more maintainable:

```diff
  export function showLlmModelDownloadToast(model?: SupportedModelLLM, onComplete?: () => void) {
    const llmChannel = new Channel();
-   const modelToDownload = model || "Llama3p2_3bQ4";
+   const modelToDownload = model || "Llama3p2_3bQ4"; // Consider making this configurable
    localLlmCommands.downloadModel(modelToDownload, llmChannel);
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- apps/desktop/src/components/settings/components/ai/shared.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view.tsx (1 hunks)
- apps/desktop/src/components/toast/model-download.tsx (2 hunks)
- apps/desktop/src/components/toast/shared.tsx (3 hunks)
- plugins/local-llm/build.rs (1 hunks)
- plugins/local-llm/js/bindings.gen.ts (2 hunks)
- plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml (0 hunks)
- plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
- plugins/local-llm/permissions/default.toml (1 hunks)
- plugins/local-llm/permissions/schemas/schema.json (3 hunks)
- plugins/local-llm/src/commands.rs (3 hunks)
- plugins/local-llm/src/ext.rs (7 hunks)
- plugins/local-llm/src/lib.rs (4 hunks)
- plugins/local-llm/src/local/mod.rs (0 hunks)
- plugins/local-llm/src/model.rs (1 hunks)
- plugins/local-llm/src/server.rs (1 hunks)
💤 Files with no reviewable changes (2)
- plugins/local-llm/src/local/mod.rs
- plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{js,ts,tsx,rs}`: 1. No error handling. 2. No unused imports, variables, or functions. 3. For comments, keep it minimal. It should be about "Why", not "What".
⚙️ Source: CodeRabbit Configuration File
List of files the instruction was applied to:
- plugins/local-llm/src/server.rs
- plugins/local-llm/build.rs
- apps/desktop/src/components/toast/model-download.tsx
- apps/desktop/src/components/settings/components/ai/stt-view.tsx
- plugins/local-llm/src/model.rs
- apps/desktop/src/components/toast/shared.tsx
- apps/desktop/src/components/settings/components/ai/shared.tsx
- plugins/local-llm/js/bindings.gen.ts
- plugins/local-llm/src/lib.rs
- plugins/local-llm/src/commands.rs
- plugins/local-llm/src/ext.rs
🧬 Code Graph Analysis (4)
plugins/local-llm/src/model.rs (1)
- plugins/local-llm/js/bindings.gen.ts (1): `SupportedModel` (55-55)

plugins/local-llm/js/bindings.gen.ts (1)
- plugins/local-stt/js/bindings.gen.ts (1): `SupportedModel` (69-69)
plugins/local-llm/src/commands.rs (2)
- plugins/local-llm/js/bindings.gen.ts (1): `SupportedModel` (55-55)
- plugins/local-llm/src/ext.rs (8): `is_model_downloading` (23-23, 47-54), `download_model` (18-22, 83-122), `get_current_model` (15-15, 162-166), `set_current_model` (16-16, 169-173)
plugins/local-llm/src/ext.rs (5)
- plugins/local-llm/src/commands.rs (7): `start_server` (57-59), `stop_server` (63-65), `get_current_model` (76-80), `set_current_model` (84-89), `download_model` (45-53), `is_model_downloading` (36-41), `is_model_downloaded` (25-32)
- plugins/local-stt/src/ext.rs (20): `start_server` (17-17, 93-119), `stop_server` (18-18, 122-130), `get_current_model` (19-19, 251-255), `set_current_model` (20-20, 258-262), `download_model` (29-33, 133-172), `is_model_downloading` (35-35, 241-248), `is_model_downloaded` (36-39, 64-82), `state` (57-57, 86-86, 112-112, 123-123, 162-162, 242-242)
- plugins/local-llm/js/bindings.gen.ts (1): `SupportedModel` (55-55)
- crates/file/src/lib.rs (2): `file_size` (54-57), `download_file_with_callback` (22-52)
- plugins/local-llm/src/manager.rs (1): `new` (23-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (30)
plugins/local-llm/src/server.rs (1)
22-22: LGTM - Clean import path update

The import path change correctly reflects the module restructuring where `ModelManager` was moved from the `local` module to the crate root.

plugins/local-llm/build.rs (1)

9-10: LGTM - New commands added correctly

The addition of "get_current_model" and "set_current_model" commands is consistent with the plugin's build configuration pattern and aligns with the new model management functionality.

plugins/local-llm/permissions/default.toml (1)

12-13: LGTM - Permissions added correctly

The new permissions "allow-get-current-model" and "allow-set-current-model" follow the correct naming convention and are properly integrated into the default permissions list.

plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml (1)

1-14: LGTM - Well-structured permission file

The autogenerated permission file follows the correct structure with proper allow/deny entries, appropriate descriptions, and correct schema reference.

plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml (1)

1-14: LGTM - Well-structured permission file

The autogenerated permission file follows the correct structure with proper allow/deny entries, appropriate descriptions, and correct schema reference.
apps/desktop/src/components/toast/model-download.tsx (1)
15-18: LGTM! Consistent pattern with existing STT model query.

The new `currentLlmModel` query follows the same pattern as `currentSttModel` and properly integrates with the model-specific architecture.

plugins/local-llm/src/model.rs (3)

1-2: LGTM! Proper expansion of supported models.

The `SUPPORTED_MODELS` array correctly includes both models and follows the established pattern.

4-8: LGTM! Valuable trait additions for the enum.

The additional derive traits (`Debug`, `Eq`, `Hash`, `PartialEq`) are valuable additions that enable:
- `Debug` for a better debugging experience
- `Eq`, `Hash`, `PartialEq` for using the enum as HashMap keys (needed for per-model download tracking)

14-29: LGTM! Complete implementation for HyprLLM model.

The new model variant is properly implemented across all methods with appropriate metadata:
- Distinct file name and URL
- Correct model size for the quantized model
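The role of those derives can be sketched in isolation. A minimal, hedged sketch: the variant names mirror the bindings, but the map contents and names below are illustrative assumptions, not the plugin's actual state type.

```rust
use std::collections::HashMap;

// Hypothetical mirror of the plugin's enum; variant set is an assumption.
#[allow(non_camel_case_types)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum SupportedModel {
    Llama3p2_3bQ4,
    HyprLLM,
}

fn main() {
    // Eq + Hash let the enum key a HashMap for per-model download state.
    let mut downloading: HashMap<SupportedModel, bool> = HashMap::new();
    downloading.insert(SupportedModel::HyprLLM, true);
    downloading.insert(SupportedModel::Llama3p2_3bQ4, false);

    // Debug gives readable log output for the variant.
    println!(
        "{:?} downloading: {}",
        SupportedModel::HyprLLM,
        downloading[&SupportedModel::HyprLLM]
    );
}
```

Without `Eq` and `Hash`, the `HashMap` insert lines would not compile, which is why the derives are a prerequisite for the per-model download tracking discussed below.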
plugins/local-llm/permissions/schemas/schema.json (3)
309-320: LGTM! Proper permission definitions for get_current_model.

The new permissions follow the established pattern with both allow and deny variants, proper descriptions, and consistent JSON structure.

381-392: LGTM! Proper permission definitions for set_current_model.

The new permissions are consistent with the existing permission structure and provide appropriate access control for model management.

418-421: LGTM! Updated default permissions include new commands.

The default permission description correctly includes the new `allow-get-current-model` and `allow-set-current-model` permissions.

apps/desktop/src/components/toast/shared.tsx (3)
4-4: LGTM! Proper import aliasing to avoid conflicts.

The import alias `SupportedModelLLM` prevents naming conflicts with the STT model types.

89-89: LGTM! Unique toast IDs per model.

Including the model name in the toast ID ensures unique toasts for different model downloads.

103-105: LGTM! Improved callback handling.

The conditional invocation of the `onComplete` callback is properly implemented and maintains the existing behavior.

plugins/local-llm/permissions/autogenerated/reference.md (4)
15-16: LGTM! Default permissions properly updated.

The default permission set correctly includes the new `allow-get-current-model` and `allow-set-current-model` permissions.

56-77: LGTM! Complete documentation for get_current_model permissions.

The permission table entries follow the established format and provide clear descriptions for both allow and deny variants.

186-207: LGTM! Updated models_dir permission documentation.

The permission descriptions are consistent with the command changes and maintain the same documentation format.

212-233: LGTM! Complete documentation for set_current_model permissions.

The permission table entries are properly documented with clear descriptions for both allow and deny variants.
plugins/local-llm/src/lib.rs (2)

10-11: Clean module reorganization!

The refactoring from a single `local` module to separate `manager` and `model` modules improves code organization and separation of concerns.

Also applies to: 17-19

28-32: Excellent refactoring for multi-model support!

The change to use `HashMap<SupportedModel, JoinHandle>` enables proper concurrent download management for multiple models, and the `Default` trait implementation simplifies state initialization.

plugins/local-llm/src/commands.rs (1)
25-32: Proper implementation of model-specific operations!

The functions correctly accept and forward the `model` parameter to their respective trait methods, with consistent error handling using `map_err`.

Also applies to: 36-41, 45-53
plugins/local-llm/src/ext.rs (3)
36-36: Good catch on the directory name!

Changed from what appears to be a test placeholder "ttt" to the proper "llm" directory name.

83-122: Excellent concurrent download management!

The implementation properly handles per-model download tasks with appropriate lifecycle management: aborting existing tasks before starting new ones and tracking them in the HashMap.

125-131: Good safety check before starting server!

Verifying that the current model is downloaded before attempting to start the server prevents runtime errors and provides clear feedback to users.
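The abort-and-replace lifecycle praised above can be sketched with a plain `HashMap`. This is a hedged sketch under stated assumptions: the names are invented, and `DownloadTask` is a stand-in for a tokio `JoinHandle`; the real code would call `abort()` on the stale handle that `insert` surfaces.

```rust
use std::collections::HashMap;

// Hypothetical sketch of per-model download tracking; names are assumptions.
#[allow(non_camel_case_types)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum SupportedModel {
    Llama3p2_3bQ4,
    HyprLLM,
}

// Stand-in for tokio::task::JoinHandle.
struct DownloadTask {
    id: u32,
}

struct State {
    download_tasks: HashMap<SupportedModel, DownloadTask>,
}

impl State {
    // Returns the id of any previously running task for this model; the
    // caller would abort that task before the replacement download proceeds.
    fn start_download(&mut self, model: SupportedModel, id: u32) -> Option<u32> {
        self.download_tasks
            .insert(model, DownloadTask { id })
            .map(|old| old.id)
    }
}

fn main() {
    let mut state = State { download_tasks: HashMap::new() };
    assert_eq!(state.start_download(SupportedModel::HyprLLM, 1), None);
    // Restarting the same model's download surfaces the stale task id.
    assert_eq!(state.start_download(SupportedModel::HyprLLM, 2), Some(1));
    // A different model is tracked independently.
    assert_eq!(state.start_download(SupportedModel::Llama3p2_3bQ4, 3), None);
    println!("per-model tracking ok");
}
```

The key design point is that `HashMap::insert` returns the previous value for the key, so the stale task cannot be silently leaked when a download is restarted.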
plugins/local-llm/js/bindings.gen.ts (5)
19-21: LGTM! Model parameter addition aligns with explicit per-model architecture.

The addition of the `SupportedModel` parameter to `isModelDownloaded` follows the architectural shift toward explicit model management, eliminating the need for implicit current model handling.

22-24: LGTM! Model parameter addition enables model-specific download status checks.

The addition of the `SupportedModel` parameter to `isModelDownloading` allows checking download status for specific models, which is essential for the new per-model management system.

25-27: LGTM! Model parameter addition maintains progress reporting functionality.

The addition of the `SupportedModel` parameter to `downloadModel` while retaining the `channel` parameter ensures both explicit model specification and progress reporting capabilities are maintained.

37-42: LGTM! New model management functions provide essential functionality.

The addition of `getCurrentModel` and `setCurrentModel` functions provides the necessary interface for querying and setting the current model, which is crucial for the new explicit per-model architecture.

55-55: No naming inconsistencies found

I confirmed that the `SupportedModel` type in plugins/local-llm/js/bindings.gen.ts uses the PascalCase value "HyprLLM" and that there are no backend references (Rust or JS/TS) using the slug "hypr-llm". The PR title's slug-case naming is a separate, human-readable convention and does not need to match the TS type. No changes required.
Actionable comments posted: 0
🧹 Nitpick comments (2)
plugins/local-llm/permissions/schemas/schema.json (2)
309-320: Enum additions LGTM, but keep alphabetical ordering for maintainability

The new `allow-`/`deny-get-current-model` items follow the existing naming convention and schema structure: good work.

Minor nit: throughout this enum we keep the entries alphabetically sorted by the command after the first token (`allow-`, `deny-`). Placing these right after `download-model` breaks that implicit order. Moving them below `get-status` would make future look-ups and diff reviews easier.

381-392: Consistent naming & structure – consider the same ordering tweak

`allow-`/`deny-set-current-model` mirror the pattern correctly.

Same alphabetical-ordering remark as above: dropping them after the other `set-*`/`start-*` group would keep the enum tidy.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- apps/desktop/src-tauri/src/ext.rs (1 hunks)
- plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
- plugins/local-llm/permissions/default.toml (1 hunks)
- plugins/local-llm/permissions/schemas/schema.json (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- plugins/local-llm/permissions/default.toml
- plugins/local-llm/permissions/autogenerated/reference.md
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{js,ts,tsx,rs}`: 1. No error handling. 2. No unused imports, variables, or functions. 3. For comments, keep it minimal. It should be about "Why", not "What".
⚙️ Source: CodeRabbit Configuration File
List of files the instruction was applied to:
apps/desktop/src-tauri/src/ext.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (4)
apps/desktop/src-tauri/src/ext.rs (3)

52-52: LGTM: Import statement updated correctly.

The import statement properly adds `SupportedModel` to align with the new model management approach.

54-56: LGTM: Current model retrieval follows established pattern.

The implementation correctly retrieves the current model with a sensible default fallback (`SupportedModel::Llama3p2_3bQ4`), matching the pattern used in the local-stt plugin above.
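The default-fallback pattern described in that comment can be sketched as follows; this is an illustrative sketch, and the `current_model` function and its `Option`-based store lookup are hypothetical stand-ins for the plugin's persisted setting.

```rust
// Hypothetical mirror of the bindings' enum; variant set is an assumption.
#[allow(non_camel_case_types)]
#[derive(Debug, Clone, PartialEq)]
enum SupportedModel {
    Llama3p2_3bQ4,
    HyprLLM,
}

// None models a user who never chose a model explicitly.
fn current_model(stored: Option<SupportedModel>) -> SupportedModel {
    stored.unwrap_or(SupportedModel::Llama3p2_3bQ4)
}

fn main() {
    assert_eq!(current_model(None), SupportedModel::Llama3p2_3bQ4);
    assert_eq!(
        current_model(Some(SupportedModel::HyprLLM)),
        SupportedModel::HyprLLM
    );
    println!("fallback ok");
}
```

Using `unwrap_or` keeps the call site infallible: the UI always receives a concrete model even before the user has made a choice.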
58-58: LGTM: Model parameter correctly passed to API.

The `is_model_downloaded` call now properly passes the current model parameter, aligning with the updated plugin API that requires explicit model specification.

plugins/local-llm/permissions/schemas/schema.json (1)

418-422: Default-set description: verify it matches the actual default array

The markdown bullet list now includes the two new permissions; great.

Double-check that the generated plugins/local-llm/permissions/autogenerated/reference.md and the default permission TOML still enumerate exactly this list (no more, no less). Mismatches silently break the permission gate at runtime.
Force-pushed from 6f7d883 to 0198d62 (compare).
This commit introduces a new AI settings view that allows users to select and download speech-to-text (STT) and large language models (LLM) for use in the application. The key changes include:
- Added initial STT and LLM model data with details like name, accuracy, speed, size, and download status.
- Implemented handlers for downloading STT and LLM models, updating the UI accordingly.
- Integrated the new model selection and download functionality into the AI settings view.
- Introduced utility functions to display download progress toasts for STT and LLM models.

These changes provide users with the ability to customize the AI models used in the application, improving the overall experience and flexibility.
This commit adds the `wer-modal` component to the `settings` module and updates the `index.ts` file to export it. The changes were made to centralize the management of all the settings-related components in a single location. The `ai/index.ts` file has also been updated to remove the exports for `llm-view`, `stt-view`, and `wer-modal` components, as they are now being exported from the main `index.ts` file. Additionally, the `model-download.tsx` file has been updated to provide more specific and informative messages to the user when they need to download the STT or LLM models for offline functionality.
The changes made in this commit focus on improving the user interface for selecting speech-to-text (STT) models in the settings section of the desktop application. The key changes are:
1. Reorganize the layout of the STT model options to be more compact and visually appealing.
2. Simplify the header section by removing the unnecessary icon and centering the "Transcribing" title.
3. Add a tooltip with an information icon to provide more context about the STT model selection.
4. Adjust the styling and hover behavior of the STT model options to make the selected model more visually distinct.
5. Optimize the layout to be more responsive and work well on different screen sizes.

These changes aim to enhance the user experience by making the STT model selection process more intuitive and visually appealing, while also providing additional context and information to the user.
This commit adds a log message to the `local-llm` plugin that prints the name of the model being used for inference. This provides more visibility into the model being used during inference requests.

feat(llama): Implement Display trait for ModelName

This commit adds an implementation of the `Display` trait for the `ModelName` enum in the `llama` crate. This allows the model name to be easily printed as a string, which is used in the `local-llm` plugin to log the model being used.
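A `Display` implementation of the kind this commit describes might look like the following sketch; the variant names and display strings here are assumptions for illustration, not the crate's actual definitions.

```rust
use std::fmt;

// Hypothetical subset of the llama crate's ModelName enum.
#[allow(non_camel_case_types)]
#[derive(Debug)]
enum ModelName {
    Llama3p2_3bQ4,
    HyprLLM,
}

impl fmt::Display for ModelName {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let name = match self {
            ModelName::Llama3p2_3bQ4 => "Llama 3.2 3B (Q4)",
            ModelName::HyprLLM => "HyprLLM",
        };
        write!(f, "{}", name)
    }
}

fn main() {
    // With Display, the plugin can log the model in use during a request.
    println!("using model: {}", ModelName::HyprLLM);
}
```

Implementing `Display` (rather than relying on `Debug`) also gives the enum a free `to_string()` via the blanket `ToString` impl, which is what makes it convenient for log messages.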
- Add DefaultModelMigrated to track user migration status
- Add LastMigrationVersion for future version-based migrations

- Replace llm.gguf with hypr-llm.gguf in test functions
- Ensures consistency with new default model filename

…tion
- Change default from Llama3p2_3bQ4 to HyprLLM for new users
- Preserve existing users' downloaded model to avoid disruption
- Track migration state to prevent repeated prompts

- Align frontend default with backend model transition
- Ensures consistent user experience for model downloads
This commit introduces several improvements to the AI settings UI and functionality:
- Removes the unused `MicIcon` component from the imports
- Enhances the visual styling of the STT and LLM model cards, including better hover and active states
- Simplifies the logic for displaying the accuracy and speed indicators, removing the unnecessary check for the `downloaded` property
- Improves the layout and responsiveness of the model download buttons, ensuring a consistent user experience

These changes aim to provide a more polished and intuitive interface for managing AI models within the application's settings.
- Add support for the `@hypr/plugin-local-llm` package to handle local LLM model management.
- Update the initial LLM models list to include new model options, such as Llama 3 (3B, Q4) and HyprLLM v1-v4.
- Implement a `modelDownloadStatus` query to check the download status of each LLM model and update the UI accordingly.
- Update the `handleLlmModelDownload` function to use the new `showLlmModelDownloadToast` function from the `@hypr/plugin-local-llm` package.
Improve the LLM model download experience by introducing a callback function to the `showLlmModelDownloadToast` function. This callback is executed when the download actually completes, allowing us to update the UI and set the selected LLM model immediately after the download finishes, rather than assuming completion after the toast is dismissed. Additionally, we update the `downloadingModels` set to remove the model key once the download is complete, providing a more accurate representation of the download status.
Actionable comments posted: 6
🧹 Nitpick comments (4)
plugins/local-llm/src/ext.rs (2)
83-89: Remove unnecessary clone operation

The clone on line 88 is unnecessary since the model can be moved directly into the async block.

```diff
  async fn download_model(
      &self,
      model: crate::SupportedModel,
      channel: Channel<i8>,
  ) -> Result<(), crate::Error> {
-     let m = model.clone();
-     let path = self.models_dir().join(m.file_name());
+     let path = self.models_dir().join(model.file_name());
+     let model_url = model.model_url();
```

Then update line 105:

```diff
- if let Err(e) = download_file_with_callback(m.model_url(), path, callback).await {
+ if let Err(e) = download_file_with_callback(model_url, path, callback).await {
```
125-131: Improve error message with model details

The error message could be more descriptive by including which model is not downloaded.

```diff
  async fn start_server(&self) -> Result<String, crate::Error> {
      let current_model = self.get_current_model()?;
      if !self.is_model_downloaded(&current_model).await? {
-         return Err(crate::Error::ModelNotDownloaded);
+         return Err(crate::Error::ModelNotDownloaded(format!(
+             "Model '{}' is not downloaded",
+             current_model.file_name()
+         )));
      }
```

apps/desktop/src/components/settings/views/ai.tsx (2)
171-173: Implement the TODO for showing file location

The TODO comment indicates missing functionality.

Would you like me to implement the functionality to open the models directory in the file explorer? This would involve using Tauri's shell API to open the folder.

```ts
const handleShowFileLocation = async (modelKey: string) => {
  const { open } = await import('@tauri-apps/api/shell');
  const { appDataDir } = await import('@tauri-apps/api/path');
  const modelsPath = await appDataDir();
  await open(`${modelsPath}/ttt`);
};
```
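The variant-with-payload error shape suggested in the `ModelNotDownloaded` comment above can be sketched independently. This is a hedged sketch: the enum, the payload type, and the message wording are assumptions, not the crate's actual `Error` type.

```rust
use std::fmt;

// Hypothetical error enum carrying the offending model's file name.
#[derive(Debug)]
enum Error {
    ModelNotDownloaded(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Carrying the file name makes the failure self-explanatory in logs.
            Error::ModelNotDownloaded(name) => {
                write!(f, "model '{}' is not downloaded", name)
            }
        }
    }
}

fn main() {
    let e = Error::ModelNotDownloaded("hypr-llm.gguf".into());
    println!("{}", e); // model 'hypr-llm.gguf' is not downloaded
}
```

Compared with a unit variant, the payload costs one allocation per error but removes the need to correlate the log line with surrounding context to learn which model was missing.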
281-693: Consider splitting this large component into smaller, focused components

At 695 lines, this component is quite large and handles multiple responsibilities. Consider extracting sections into separate components for better maintainability.

Extract the following into separate components:
- `TranscribingSection` (lines 283-428)
- `EnhancingSection` (lines 430-540)
- `CustomEndpointSection` (lines 542-686)

Example structure:

```tsx
// TranscribingSection.tsx
export function TranscribingSection({ models, selectedModel, onModelSelect, onDownload, downloadingModels }) {
  // STT model selection UI
}

// Then in the main component:
<TranscribingSection
  models={sttModels}
  selectedModel={selectedSTTModel}
  onModelSelect={setSelectedSTTModel}
  onDownload={handleModelDownload}
  downloadingModels={downloadingModels}
/>
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (30)
- apps/desktop/src-tauri/src/ext.rs (1 hunks)
- apps/desktop/src/components/settings/components/ai/index.ts (0 hunks)
- apps/desktop/src/components/settings/components/ai/llm-view.tsx (0 hunks)
- apps/desktop/src/components/settings/components/ai/shared.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view.tsx (1 hunks)
- apps/desktop/src/components/settings/components/index.ts (1 hunks)
- apps/desktop/src/components/settings/views/ai.tsx (5 hunks)
- apps/desktop/src/components/toast/model-download.tsx (3 hunks)
- apps/desktop/src/components/toast/shared.tsx (3 hunks)
- apps/desktop/src/locales/en/messages.po (28 hunks)
- apps/desktop/src/locales/ko/messages.po (28 hunks)
- crates/file/src/lib.rs (2 hunks)
- crates/gguf/src/lib.rs (1 hunks)
- crates/llama/src/lib.rs (2 hunks)
- crates/whisper-local/src/model.rs (1 hunks)
- plugins/local-llm/build.rs (1 hunks)
- plugins/local-llm/js/bindings.gen.ts (2 hunks)
- plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml (0 hunks)
- plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
- plugins/local-llm/permissions/default.toml (1 hunks)
- plugins/local-llm/permissions/schemas/schema.json (3 hunks)
- plugins/local-llm/src/commands.rs (3 hunks)
- plugins/local-llm/src/ext.rs (6 hunks)
- plugins/local-llm/src/lib.rs (4 hunks)
- plugins/local-llm/src/local/mod.rs (0 hunks)
- plugins/local-llm/src/model.rs (1 hunks)
- plugins/local-llm/src/server.rs (2 hunks)
- plugins/local-llm/src/store.rs (1 hunks)
💤 Files with no reviewable changes (4)
- plugins/local-llm/permissions/autogenerated/commands/list_ollama_models.toml
- apps/desktop/src/components/settings/components/ai/index.ts
- plugins/local-llm/src/local/mod.rs
- apps/desktop/src/components/settings/components/ai/llm-view.tsx
✅ Files skipped from review due to trivial changes (7)
- crates/whisper-local/src/model.rs
- crates/gguf/src/lib.rs
- apps/desktop/src/components/settings/components/index.ts
- plugins/local-llm/permissions/autogenerated/commands/set_current_model.toml
- plugins/local-llm/permissions/autogenerated/commands/get_current_model.toml
- crates/file/src/lib.rs
- plugins/local-llm/src/store.rs
🚧 Files skipped from review as they are similar to previous changes (14)
- plugins/local-llm/build.rs
- apps/desktop/src-tauri/src/ext.rs
- plugins/local-llm/permissions/default.toml
- apps/desktop/src/components/settings/components/ai/shared.tsx
- apps/desktop/src/components/settings/components/ai/stt-view.tsx
- plugins/local-llm/permissions/schemas/schema.json
- apps/desktop/src/components/toast/model-download.tsx
- plugins/local-llm/src/server.rs
- plugins/local-llm/src/model.rs
- plugins/local-llm/src/lib.rs
- apps/desktop/src/components/toast/shared.tsx
- plugins/local-llm/permissions/autogenerated/reference.md
- plugins/local-llm/src/commands.rs
- plugins/local-llm/js/bindings.gen.ts
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
Instructions used from:
Sources:
⚙️ CodeRabbit Configuration File
🧬 Code Graph Analysis (1)
apps/desktop/src/components/settings/views/ai.tsx (9)
- apps/desktop/src/components/toast/shared.tsx (2): `showSttModelDownloadToast` (54-82), `showLlmModelDownloadToast` (84-113)
- plugins/local-llm/js/bindings.gen.ts (1): `SupportedModel` (55-55)
- plugins/local-stt/js/bindings.gen.ts (1): `SupportedModel` (69-69)
- packages/ui/src/components/ui/tooltip.tsx (3): `Tooltip` (28-28), `TooltipTrigger` (28-28), `TooltipContent` (28-28)
- packages/ui/src/components/ui/button.tsx (1): `Button` (37-89)
- packages/ui/src/lib/utils.ts (1): `cn` (4-6)
- packages/ui/src/components/ui/form.tsx (7): `Form` (158-158), `FormField` (158-158), `FormItem` (158-158), `FormLabel` (158-158), `FormDescription` (158-158), `FormControl` (158-158), `FormMessage` (158-158)
- packages/ui/src/components/ui/input.tsx (1): `Input` (22-22)
- packages/ui/src/components/ui/select.tsx (5): `Select` (174-174), `SelectTrigger` (174-174), `SelectValue` (174-174), `SelectContent` (174-174), `SelectItem` (174-174)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: ci
- GitHub Check: ci (macos, macos-latest)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (4)
crates/llama/src/lib.rs (2)
32-40: LGTM! Clean Display implementation. The Display trait implementation for ModelName is straightforward and correctly handles all enum variants with appropriate formatting.
387-387: LGTM! Test path updated correctly. The model filename change from "ttt/llm.gguf" to "ttt/hypr-llm.gguf" aligns with the broader changes across crates to support the new HyprLLM model.
apps/desktop/src/locales/ko/messages.po (1)
1-1258: LGTM! The localization file correctly updates source references to match the refactored code structure. No translation changes required.
apps/desktop/src/locales/en/messages.po (1)
881-883: LGTM! The localization file correctly adds the new "Open finder view" translation and updates source references to match the refactored code.
Force-pushed from 0198d62 to f63fed4
- Add optional QueryClient parameter to showSttModelDownloadToast and showLlmModelDownloadToast
- Add React Query cache invalidation when downloads start
- Invalidate stt-model-downloading, llm-model-downloading, and check-model-downloaded queries
- Prevents duplicate toast notifications by syncing cache with download state
- Add useQueryClient hook to LocalAI component
- Pass queryClient to showSttModelDownloadToast and showLlmModelDownloadToast calls
- Ensures proper cache invalidation when downloads are initiated from settings
- Add useQueryClient hook to ModelDownloadNotification component
- Pass queryClient to toast function calls when download button is clicked
- Prevents toast loop when downloads are started from notification
- Add useQueryClient hook to ModelSelectionView component
- Pass queryClient to toast function calls in handleContinue
- Ensures proper cache sync when downloads are initiated during onboarding
- Pass existing queryClient instance to showSttModelDownloadToast call
- Completes cache invalidation integration for STT model downloads
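The invalidation pattern described in these commit messages can be sketched as follows. This is a minimal illustration, not the app's actual code: `QueryClientLike` stands in for `@tanstack/react-query`'s `QueryClient`, and only the three query keys are taken from the commit message — everything else is an assumption.

```typescript
// Minimal sketch of the optional-QueryClient invalidation pattern.
// `QueryClientLike` is a stand-in for @tanstack/react-query's QueryClient.
interface QueryClientLike {
  invalidateQueries(filter: { queryKey: readonly unknown[] }): void;
}

// The three query keys named in the commit message above.
const DOWNLOAD_QUERY_KEYS = [
  ["stt-model-downloading"],
  ["llm-model-downloading"],
  ["check-model-downloaded"],
] as const;

function invalidateDownloadQueries(queryClient?: QueryClientLike): string[] {
  // The parameter is optional (as in the commits); with no client passed,
  // invalidation is skipped and nothing is reported as invalidated.
  if (!queryClient) return [];
  const invalidated: string[] = [];
  for (const queryKey of DOWNLOAD_QUERY_KEYS) {
    queryClient.invalidateQueries({ queryKey });
    invalidated.push(queryKey[0]);
  }
  return invalidated;
}
```

Invalidating these keys when a download starts forces the components watching them to refetch, which is what keeps the toast state and the download state in sync.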
Force-pushed from f63fed4 to 55869d8
Force-pushed from 55869d8 to b355043
Actionable comments posted: 0
♻️ Duplicate comments (4)
apps/desktop/src/components/settings/views/ai.tsx (4)
50-121: Extract hardcoded model data to configuration. The hardcoded model data should be moved to a configuration file or fetched from an API for better maintainability.
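The extraction suggested above could look like this sketch: a single typed catalog that the UI derives its lists from. The field names, model keys, and numbers here are illustrative assumptions, not the actual shape used in ai.tsx.

```typescript
// Hypothetical typed catalog replacing inline hardcoded model data.
interface ModelInfo {
  displayName: string;
  sizeMb: number;       // illustrative size, not a real figure
  languages: string[];
}

const MODEL_CATALOG: Record<string, ModelInfo> = {
  "hypr-llm": { displayName: "HyprLLM", sizeMb: 2048, languages: ["en", "ko"] },
  "whisper-small": { displayName: "Whisper Small", sizeMb: 466, languages: ["en", "ko"] },
};

// Deriving lists from the catalog keeps the UI in sync when models are added,
// instead of editing several hardcoded arrays by hand.
function listModelKeys(): string[] {
  return Object.keys(MODEL_CATALOG);
}
```

Adding a model then becomes a one-line catalog change rather than edits scattered across the component.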
132-141: Replace fragile string-based type detection with proper type checking. Using the string prefix "Quantized" to determine model type is fragile and error-prone. The unsafe type cast on line 140 should also be avoided.
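A discriminated union is one way to make this check explicit. The `ModelEntry` shape, its `kind` tag, and the model names below are assumptions for illustration; the point is only that a tagged union removes both the prefix sniffing and the unsafe cast.

```typescript
// Hypothetical tagged union replacing the "Quantized" string-prefix check.
type ModelEntry =
  | { kind: "llm"; name: string }
  | { kind: "stt"; name: string };

function isLlmModel(entry: ModelEntry): entry is Extract<ModelEntry, { kind: "llm" }> {
  // The discriminant makes the check explicit; no string sniffing or
  // unchecked `as` cast is needed, and TypeScript narrows the type for us.
  return entry.kind === "llm";
}
```

Inside an `if (isLlmModel(entry))` branch the compiler knows the exact variant, so the cast on line 140 becomes unnecessary.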
156-170: Fix unsafe type cast in handleLlmModelDownload. The type cast on line 159 is unsafe and could cause runtime errors.
205-218: Make model download status checks scalable. The hardcoded model keys in the download status check won't scale as more models are added.
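The scalable check suggested above could look like this sketch: iterate over a model list instead of hardcoding each key. `isModelDownloaded` is passed in as a stand-in for the plugin binding of the same name; its real signature is an assumption here.

```typescript
// Hypothetical scalable status check: one pass over a model list instead of
// one hardcoded query per model key.
async function getDownloadStatuses(
  models: string[],
  isModelDownloaded: (model: string) => Promise<boolean>,
): Promise<Record<string, boolean>> {
  // Check all models concurrently, then collect the results into a map.
  const entries = await Promise.all(
    models.map(async (m) => [m, await isModelDownloaded(m)] as const),
  );
  return Object.fromEntries(entries);
}
```

New models then only need to appear in the list; the status check itself never changes.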
🧹 Nitpick comments (1)
apps/desktop/src/components/settings/views/ai.tsx (1)
172-174: Implement missing file location functionality. The TODO comment indicates incomplete functionality for opening models in finder.
Complete the implementation for the file location functionality:
```diff
  const handleShowFileLocation = async (modelKey: string) => {
-   // TODO: Implement opening models in finder functionality
+   try {
+     // Implement platform-specific file location opening
+     // This might require a Tauri command or similar native integration
+     await commands.showModelInFinder(modelKey);
+   } catch (error) {
+     console.error('Failed to show file location:', error);
+   }
  };
```
Would you like me to help implement this functionality or create an issue to track this task?
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- apps/desktop/src/components/settings/components/ai/stt-view.tsx (2 hunks)
- apps/desktop/src/components/settings/views/ai.tsx (5 hunks)
- apps/desktop/src/components/toast/model-download.tsx (4 hunks)
- apps/desktop/src/components/toast/shared.tsx (4 hunks)
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx (3 hunks)
- apps/desktop/src/locales/en/messages.po (19 hunks)
- apps/desktop/src/locales/ko/messages.po (19 hunks)
- crates/llama/src/lib.rs (2 hunks)
- plugins/local-llm/build.rs (1 hunks)
- plugins/local-llm/permissions/autogenerated/commands/restart_server.toml (1 hunks)
- plugins/local-llm/permissions/autogenerated/reference.md (3 hunks)
- plugins/local-llm/permissions/default.toml (1 hunks)
- plugins/local-llm/permissions/schemas/schema.json (3 hunks)
- plugins/local-llm/src/server.rs (2 hunks)
- plugins/local-stt/build.rs (1 hunks)
- plugins/local-stt/permissions/autogenerated/commands/restart_server.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/reference.md (2 hunks)
- plugins/local-stt/permissions/default.toml (1 hunks)
- plugins/local-stt/permissions/schemas/schema.json (2 hunks)
✅ Files skipped from review due to trivial changes (4)
- plugins/local-llm/permissions/autogenerated/commands/restart_server.toml
- plugins/local-stt/permissions/autogenerated/reference.md
- plugins/local-stt/permissions/autogenerated/commands/restart_server.toml
- plugins/local-stt/permissions/schemas/schema.json
🚧 Files skipped from review as they are similar to previous changes (12)
- plugins/local-llm/src/server.rs
- plugins/local-llm/build.rs
- crates/llama/src/lib.rs
- plugins/local-llm/permissions/default.toml
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx
- apps/desktop/src/components/settings/components/ai/stt-view.tsx
- apps/desktop/src/locales/en/messages.po
- apps/desktop/src/components/toast/model-download.tsx
- apps/desktop/src/locales/ko/messages.po
- plugins/local-llm/permissions/schemas/schema.json
- apps/desktop/src/components/toast/shared.tsx
- plugins/local-llm/permissions/autogenerated/reference.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
Instructions used from: ⚙️ CodeRabbit Configuration File
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (5)
plugins/local-stt/build.rs (1)
10-10: LGTM! The addition of "restart_server" to the COMMANDS array is correctly implemented and follows the existing pattern.
plugins/local-stt/permissions/default.toml (1)
11-11: LGTM! The addition of "allow-restart-server" permission is correctly implemented and aligns with the "restart_server" command added in build.rs.
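The wiring approved above can be sketched as a TOML fragment. Only the `allow-restart-server` entry is confirmed by this review; the description, schema line, and neighboring permission are illustrative assumptions about plugins/local-stt/permissions/default.toml.

```toml
# Hypothetical sketch of the default permission set after the change.
"$schema" = "schemas/schema.json"

[default]
description = "Default permissions for the local STT plugin"  # illustrative
permissions = [
  "allow-download-model",   # illustrative neighboring entry
  "allow-restart-server",   # the entry added in this PR
]
```

Tauri generates the per-command `allow-*`/`deny-*` permissions from the COMMANDS array in build.rs, so the build.rs and default.toml changes must stay in lockstep.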
apps/desktop/src/components/settings/views/ai.tsx (3)
283-429: STT models UI section looks well-structured. The UI implementation for STT models follows consistent patterns with proper state management and accessibility considerations.
431-540: LLM models UI section maintains good consistency. The LLM models UI follows the same patterns as the STT section with appropriate conditional rendering and state management.
543-687: Custom endpoint form implementation is robust. The form handling includes proper validation, conditional rendering, and state synchronization. The integration with React Hook Form and the custom LLM settings is well-implemented.
No description provided.