Custom providers (OpenAI, Gemini, OpenRouter) #1228
Conversation
📝 Walkthrough

This update introduces a modular, multi-provider AI configuration system in the desktop application. It adds new React components and TypeScript types for managing local and custom LLM/STT models, expands backend support for multiple AI providers (OpenAI, Gemini, OpenRouter, and custom endpoints), and extends the connector plugin's command, permission, and storage infrastructure to support granular configuration and access control for these providers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as UI (LocalAI + Child Components)
    participant Backend as Backend (Connector Plugin)

    User->>UI: Selects "LLM - Custom" tab
    UI->>UI: Renders LLMCustomView with provider accordions
    User->>UI: Enters API key/model for a provider
    UI->>Backend: Calls setXxxApiKey / setXxxModel command
    Backend->>Backend: Stores value under StoreKey
    Backend-->>UI: Confirms storage
    UI->>Backend: Calls setProviderSource with provider name
    Backend-->>UI: Confirms provider source set
    User->>UI: Switches to "LLM - Local" or "Transcription"
    UI->>Backend: Fetches local model info/status
    Backend-->>UI: Returns model info
    User->>UI: Initiates model download
    UI->>Backend: Triggers download handler
    Backend-->>UI: Updates download status
```
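To make the flow concrete, here is a minimal sketch of what the UI side of this sequence could look like when calling the connector commands directly through Tauri's `invoke` (assuming Tauri v2; the `plugin:connector|…` command names follow the autogenerated bindings shown later in this review, but the argument key names are assumptions):

```ts
import { invoke } from "@tauri-apps/api/core";

// Hypothetical helper mirroring the diagram: persist a provider's key and
// model, then mark that provider as the active source.
async function configureOpenAI(apiKey: string, model: string): Promise<void> {
  await invoke("plugin:connector|set_openai_api_key", { apiKey }); // arg name assumed
  await invoke("plugin:connector|set_openai_model", { model }); // arg name assumed
  await invoke("plugin:connector|set_provider_source", { providerSource: "openai" }); // arg name assumed
}
```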
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40–60 minutes
Actionable comments posted: 6
♻️ Duplicate comments (1)
apps/desktop/src/components/settings/components/ai/stt-view.tsx (1)
283-283: Make the file manager text platform-aware.

"Show in Finder" is macOS-specific terminology.
🧹 Nitpick comments (5)
plugins/connector/permissions/autogenerated/commands/get_others_model.toml (1)
5-13: Identifier wording minor nit

"others" is a bit ambiguous compared to provider-specific identifiers used elsewhere. If "Others" maps to an actual provider enum variant, disregard; otherwise consider renaming for clarity (e.g., `get_custom_model`).

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
106-106: Make the file manager text platform-aware.

"Show in Finder" is macOS-specific. Consider making this text platform-aware for better cross-platform UX.

You could use a helper function to get the appropriate text:

```tsx
const getFileManagerText = () => {
  if (navigator.userAgent.includes('Mac')) return 'Show in Finder';
  if (navigator.userAgent.includes('Win')) return 'Show in Explorer';
  return 'Show in File Manager';
};
```

Then use it in the button:

```diff
-Show in Finder
+{getFileManagerText()}
```

apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (2)
61-61: Consider externalizing API key validation patterns.

The hardcoded API key prefixes might become outdated and are duplicated across providers.
Create a configuration object for validation patterns:
```ts
const providerValidation = {
  openai: { keyPrefix: 'sk-', keyPattern: /^sk-[a-zA-Z0-9]+$/ },
  gemini: { keyPrefix: 'AIza', keyPattern: /^AIza[a-zA-Z0-9]+$/ },
  openrouter: { keyPrefix: 'sk-', keyPattern: /^sk-[a-zA-Z0-9]+$/ }
};
```

This makes it easier to update patterns and add more sophisticated validation if needed.
Also applies to: 76-76, 91-91
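A sketch of how the form watchers could consume such a table, assuming the `providerValidation` object above (the helper name is illustrative):

```ts
type Provider = keyof typeof providerValidation;

// Illustrative helper: one code path validates any provider's key.
const isValidApiKey = (provider: Provider, apiKey: string): boolean =>
  providerValidation[provider].keyPattern.test(apiKey);

// e.g. in the OpenAI watcher, replacing the inline startsWith("sk-") check:
// if (values.api_key && isValidApiKey("openai", values.api_key) && values.model) { ... }
```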
18-38: Consider making model lists configurable or dynamic.

Hardcoded model lists will require code changes when providers add or remove models.
Consider:
- Fetching available models from each provider's API
- Storing model lists in a configuration file
- Allowing users to input custom model names
This would make the component more maintainable and future-proof.
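For the first option, a minimal sketch of fetching model IDs from an OpenAI-compatible `/models` endpoint (the response shape beyond `data[].id` is an assumption, and Gemini/OpenRouter would need their own adapters):

```ts
// Minimal sketch: list model IDs from an OpenAI-compatible endpoint,
// e.g. fetchModelIds("https://api.openai.com/v1", apiKey).
async function fetchModelIds(apiBase: string, apiKey: string): Promise<string[]> {
  const res = await fetch(`${apiBase}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Failed to list models: ${res.status}`);
  const body = (await res.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => m.id);
}
```

Falling back to the hardcoded lists when the request fails would keep the UI usable offline.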
apps/desktop/src/components/settings/views/ai.tsx (1)
76-88: Consider making URL validation less restrictive

The current validation logic enforces a `/v1` suffix for URLs containing "openai" or "openrouter", which might be too restrictive for custom endpoints. Some providers might use different API versioning schemes or paths.

Consider making this validation optional or configurable:

```diff
- api_base: z.string().url({ message: "Please enter a valid URL" }).min(1, { message: "URL is required" }).refine(
-   (value) => {
-     const v1Needed = ["openai", "openrouter"].some((host) => value.includes(host));
-     if (v1Needed && !value.endsWith("/v1")) {
-       return false;
-     }
-     return true;
-   },
-   { message: "Should end with '/v1'" },
- ).refine(
+ api_base: z.string().url({ message: "Please enter a valid URL" }).min(1, { message: "URL is required" }).refine(
    (value) => !value.includes("chat/completions"),
    { message: "`/chat/completions` will be appended automatically" },
  ),
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (34)
- apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1 hunks)
- apps/desktop/src/components/settings/components/ai/shared.tsx (2 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view.tsx (2 hunks)
- apps/desktop/src/components/settings/views/ai.tsx (7 hunks)
- apps/desktop/src/locales/en/messages.po (20 hunks)
- apps/desktop/src/locales/ko/messages.po (20 hunks)
- plugins/connector/build.rs (1 hunks)
- plugins/connector/js/bindings.gen.ts (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_gemini_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_gemini_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_openai_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_openrouter_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_openrouter_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_others_api_base.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_others_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_others_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/get_provider_source.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_gemini_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_gemini_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_openai_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_openai_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_openrouter_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_openrouter_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_others_api_base.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_others_api_key.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_others_model.toml (1 hunks)
- plugins/connector/permissions/autogenerated/commands/set_provider_source.toml (1 hunks)
- plugins/connector/permissions/autogenerated/reference.md (4 hunks)
- plugins/connector/permissions/default.toml (1 hunks)
- plugins/connector/permissions/schemas/schema.json (3 hunks)
- plugins/connector/src/commands.rs (1 hunks)
- plugins/connector/src/lib.rs (1 hunks)
- plugins/connector/src/store.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/connector/src/lib.rs
- plugins/connector/src/store.rs
- apps/desktop/src/components/settings/components/ai/llm-local-view.tsx
- plugins/connector/build.rs
- apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx
- plugins/connector/js/bindings.gen.ts
- apps/desktop/src/components/settings/views/ai.tsx
- apps/desktop/src/components/settings/components/ai/stt-view.tsx
- apps/desktop/src/components/settings/components/ai/shared.tsx
- plugins/connector/src/commands.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (macos, macos-latest)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (39)
plugins/connector/permissions/autogenerated/commands/set_others_api_base.toml (2)
1-4: Header & schema path look correct – no concerns
Autogenerated banner and relative path resolve cleanly (../../schemas/schema.json).
5-13: Command Implementation & Registration Confirmed

The `set_others_api_base` function is defined in `plugins/connector/src/commands.rs` (line 221) and registered in `plugins/connector/src/lib.rs` (line 37). No further changes are required.

plugins/connector/permissions/autogenerated/commands/get_gemini_model.toml (2)
1-4: Header & schema path look correct – no concerns
5-13: Command `get_gemini_model` implementation and binding verified
- Rust implementation found at plugins/connector/src/commands.rs:289
- JS binding exists at plugins/connector/js/bindings.gen.ts:83
No changes required.
plugins/connector/permissions/autogenerated/commands/set_others_model.toml (2)
1-4: Header & schema path look correct – no concerns
5-13: set_others_model Permission Wiring Confirmed
The `set_others_model` command is implemented and registered, so the TOML permission entries will be active:

- Implementation: `plugins/connector/src/commands.rs:244`
- Registration: `plugins/connector/src/lib.rs:39`

plugins/connector/permissions/autogenerated/commands/get_provider_source.toml (2)
1-4: Header & schema path look correct – no concerns
5-13: get_provider_source command verification passed
The `get_provider_source` function is implemented and exposed correctly:

- Rust implementation found at `plugins/connector/src/commands.rs:163`
- JS binding confirmed in `plugins/connector/js/bindings.gen.ts:53`

No further action needed.
plugins/connector/permissions/autogenerated/commands/get_openai_model.toml (2)
1-4: Header & schema path look correct – no concerns
5-13: get_openai_model is implemented & exposed
Confirmed that the command exists in Rust and is wired up in the JS bindings, so the TOML permissions are in sync.
• plugins/connector/src/commands.rs:266 – `pub async fn get_openai_model…`
• plugins/connector/js/bindings.gen.ts:77 – `TAURI_INVOKE("plugin:connector|get_openai_model")`

plugins/connector/permissions/autogenerated/commands/set_openai_model.toml (1)
1-14: Permissions entry looks correct – LGTM
Schema path, identifiers, and allow/deny lists follow the established pattern. No issues spotted.

plugins/connector/permissions/autogenerated/commands/set_openrouter_model.toml (1)
1-14: Permissions entry looks correct – LGTM
Consistent naming, schema reference, and command lists. Good to merge.

plugins/connector/permissions/autogenerated/commands/set_provider_source.toml (1)
1-14: Permissions entry looks correct – LGTM
All fields adhere to the conventions used across the permission set.

plugins/connector/permissions/autogenerated/commands/get_others_api_base.toml (1)
1-14: Permissions entry looks correct – LGTM
Unique identifiers and command mapping are sound.

plugins/connector/permissions/autogenerated/commands/set_gemini_api_key.toml (1)
1-14: Permissions entry looks correct – LGTM
Matches the schema path and naming conventions; no further action needed.

plugins/connector/permissions/autogenerated/commands/get_gemini_api_key.toml (1)
5-13: Pattern consistent – no blocking issues

The allow/deny dual-entry pattern and identifier naming are consistent with existing autogenerated permission files. Schema reference and descriptions look correct.
plugins/connector/permissions/autogenerated/commands/set_gemini_model.toml (1)
5-13: Matches established permission-file convention

File follows the standard structure (schema link, paired allow/deny sections, clear identifiers). No action required.
plugins/connector/permissions/autogenerated/commands/get_openrouter_model.toml (1)
5-13: Structure and identifiers look good

Conforms to the autogenerated pattern; identifiers and command names are accurate.
plugins/connector/permissions/autogenerated/commands/get_openrouter_api_key.toml (1)
5-13: No issues found

Schema path, allow/deny entries, and descriptions align with the rest of the permissions set.
plugins/connector/permissions/autogenerated/commands/set_openai_api_key.toml (1)
1-13: File correctly generated – no issues detected

Schema reference, identifiers, descriptions and allow/deny arrays follow the established pattern used across the permission set. Nothing to change.

plugins/connector/permissions/autogenerated/commands/set_others_api_key.toml (1)
1-13: Consistent permission stub

Structure and naming match the existing convention; looks good.

plugins/connector/permissions/autogenerated/commands/set_openrouter_api_key.toml (1)
plugins/connector/permissions/autogenerated/commands/set_openrouter_api_key.toml (1)
1-13: Permission file aligns with convention

The allow/deny entries are in place and correctly named.

plugins/connector/permissions/autogenerated/commands/get_others_api_key.toml (1)
plugins/connector/permissions/autogenerated/commands/get_others_api_key.toml (1)
1-13: LGTM

Correct schema reference and symmetric allow/deny blocks present.

plugins/connector/src/store.rs (1)
plugins/connector/src/store.rs (1)
14-20: Verify downstream handling of new `StoreKey` variants

Seven variants were appended. Ensure:

- Every new key is registered in all persistence helpers and JS bindings (`bindings.gen.ts`) so `specta` can export them.
- Migration logic (if any) tolerates the new discriminants when reading older persisted data; otherwise, add version-gated fallback.
No code issues here, just a reminder to confirm the end-to-end flow.
plugins/connector/src/lib.rs (1)
29-47: LGTM! Well-structured command registration for multi-provider AI support.

The new command registrations follow consistent naming conventions and properly support the multi-provider AI configuration feature. The getter/setter pattern for API keys, models, and provider sources is well-implemented across OpenAI, Gemini, OpenRouter, and custom endpoints.
plugins/connector/permissions/default.toml (1)
15-33: LGTM! Comprehensive permission coverage for new AI provider commands.

The new permissions properly correspond to all the commands added in `lib.rs` and follow the established naming convention. The granular allow-get- and allow-set- permissions provide appropriate access control for the multi-provider AI configuration features.

plugins/connector/build.rs (1)
13-31: LGTM! Complete and accurate command list for build configuration.

The COMMANDS array properly includes all new AI provider commands with names that exactly match those registered in `lib.rs`. This ensures proper plugin build and binding generation for the multi-provider AI features.

apps/desktop/src/locales/en/messages.po (3)
319-321: LGTM! Clear and informative AI provider messages.

The new localization messages provide excellent user guidance for different AI providers (OpenAI, Gemini, OpenRouter, Others), with clear descriptions of what each option offers. The messaging is consistent and user-friendly.
Also applies to: 794-796, 1091-1097, 1107-1109, 1441-1447
623-625: LGTM! Appropriate messages for new UI structure.

The localization messages properly support the new tabbed interface with clear section names like "Custom Endpoints", "LLM - Local", "LLM - Custom", and "Transcription". These help users navigate the reorganized AI configuration interface.
Also applies to: 903-909, 931-933, 1401-1403
385-395: LGTM! Clear form field guidance.

The localization messages for form fields provide helpful guidance for users configuring API keys, base URLs, and model selections. The instructional text clearly explains what information users need to provide for each field.
Also applies to: 713-719, 961-969, 1232-1234
apps/desktop/src/locales/ko/messages.po (1)
319-321: LGTM! Proper structural consistency for Korean localization.

The Korean localization file maintains excellent structural consistency with the English version, including all new message IDs for AI providers, UI sections, and form fields. While translations are not yet provided (empty msgstr), the structure is properly prepared for future translation work.
Also applies to: 623-625, 794-796, 903-909, 931-933, 1091-1097, 1107-1109, 1232-1234, 1401-1403, 1441-1447
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
54-62: Add error handling for async operations in onClick handler.

Multiple async operations are performed without error handling, which could lead to inconsistent state if any operation fails.
Wrap the operations in a try-catch block or handle errors individually:
```diff
 onClick={() => {
   if (model.available && model.downloaded) {
-    setSelectedLLMModel(model.key);
-    localLlmCommands.setCurrentModel(model.key as SupportedModel);
-    // CRITICAL: Disable custom LLM when local model is selected
-    setCustomLLMEnabledMutation.mutate(false);
-    localLlmCommands.restartServer();
+    const handleModelSelection = async () => {
+      try {
+        setSelectedLLMModel(model.key);
+        await localLlmCommands.setCurrentModel(model.key as SupportedModel);
+        // CRITICAL: Disable custom LLM when local model is selected
+        setCustomLLMEnabledMutation.mutate(false);
+        await localLlmCommands.restartServer();
+      } catch (error) {
+        console.error("Failed to select model:", error);
+        // Reset UI state on error
+        setSelectedLLMModel(currentLLMModel.data || "");
+      }
+    };
+    handleModelSelection();
   }
 }}
```

Likely an incorrect or invalid review comment.
apps/desktop/src/components/settings/components/ai/stt-view.tsx (2)
199-205: Add error handling for model selection operations.

The async operations in the onClick handler lack error handling.
```diff
 onClick={() => {
   if (model.downloaded) {
-    setSelectedSTTModel(model.key);
-    localSttCommands.setCurrentModel(model.key as any);
-    localSttCommands.restartServer();
+    const handleModelSelection = async () => {
+      try {
+        setSelectedSTTModel(model.key);
+        await localSttCommands.setCurrentModel(model.key as SupportedModel);
+        await localSttCommands.restartServer();
+      } catch (error) {
+        console.error("Failed to select STT model:", error);
+        // Reset selection on error
+        setSelectedSTTModel(currentSTTModel.data || "");
+      }
+    };
+    handleModelSelection();
   }
 }}
```

Likely an incorrect or invalid review comment.
132-155: Improve the download status query implementation.

The implementation has hardcoded model names and lacks error handling.
Use the model metadata keys dynamically and add error handling:
```diff
 const sttModelDownloadStatus = useQuery({
   queryKey: ["stt-model-download-status"],
   queryFn: async () => {
-    const statusChecks = await Promise.all([
-      localSttCommands.isModelDownloaded("QuantizedTiny"),
-      localSttCommands.isModelDownloaded("QuantizedTinyEn"),
-      localSttCommands.isModelDownloaded("QuantizedBase"),
-      localSttCommands.isModelDownloaded("QuantizedBaseEn"),
-      localSttCommands.isModelDownloaded("QuantizedSmall"),
-      localSttCommands.isModelDownloaded("QuantizedSmallEn"),
-      localSttCommands.isModelDownloaded("QuantizedLargeTurbo"),
-    ]);
-    return {
-      "QuantizedTiny": statusChecks[0],
-      "QuantizedTinyEn": statusChecks[1],
-      "QuantizedBase": statusChecks[2],
-      "QuantizedBaseEn": statusChecks[3],
-      "QuantizedSmall": statusChecks[4],
-      "QuantizedSmallEn": statusChecks[5],
-      "QuantizedLargeTurbo": statusChecks[6],
-    } as Record<string, boolean>;
+    const models = Object.keys(sttModelMetadata) as SupportedModel[];
+    const statusChecks = await Promise.allSettled(
+      models.map(model => localSttCommands.isModelDownloaded(model))
+    );
+
+    return models.reduce((acc, model, index) => {
+      const result = statusChecks[index];
+      acc[model] = result.status === 'fulfilled' ? result.value : false;
+      return acc;
+    }, {} as Record<string, boolean>);
   },
   refetchInterval: 3000,
+  onError: (error) => {
+    console.error("Failed to check model download status:", error);
+  }
 });
```

Likely an incorrect or invalid review comment.
plugins/connector/permissions/schemas/schema.json (1)
333-656: LGTM! Well-structured permission definitions.

The new permissions follow a consistent naming pattern and each allow permission has its corresponding deny permission. The structure aligns well with the multi-provider architecture.
plugins/connector/permissions/autogenerated/reference.md (1)
17-36: LGTM! Documentation properly reflects the new permissions.

The auto-generated documentation correctly lists all the new permissions for multi-provider support with consistent formatting and clear descriptions.
Also applies to: 128-176, 258-279, 284-331, 336-383, 388-435, 567-823
plugins/connector/js/bindings.gen.ts (1)
1-175: Auto-generated file looks correct

The generated bindings follow the established patterns and are properly typed. No issues found.
apps/desktop/src/components/settings/components/ai/shared.tsx (1)
44-141: Well-structured type definitions

The new interfaces and types provide excellent type safety and clear contracts for the AI configuration components. Good use of TypeScript features and proper separation of concerns.
plugins/connector/src/commands.rs (1)
104-320: Consistent and well-implemented storage commands

All new commands follow the established patterns correctly with proper error handling and consistent behavior. The use of `unwrap_or_default()` for getters ensures safe handling of missing values.
```tsx
useEffect(() => {
  const subscription = openaiForm.watch((values) => {
    // Manual validation: OpenAI key starts with "sk-" and model is selected
    if (values.api_key && values.api_key.startsWith("sk-") && values.model) {
      configureCustomEndpoint({
        provider: "openai",
        api_base: "", // Will be auto-set
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [openaiForm, configureCustomEndpoint]);

useEffect(() => {
  const subscription = geminiForm.watch((values) => {
    // Manual validation: Gemini key starts with "AIza" and model is selected
    if (values.api_key && values.api_key.startsWith("AIza") && values.model) {
      configureCustomEndpoint({
        provider: "gemini",
        api_base: "", // Will be auto-set
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [geminiForm, configureCustomEndpoint]);

useEffect(() => {
  const subscription = openrouterForm.watch((values) => {
    // Manual validation: OpenRouter key starts with "sk-" and model is selected
    if (values.api_key && values.api_key.startsWith("sk-") && values.model) {
      configureCustomEndpoint({
        provider: "openrouter",
        api_base: "", // Will be auto-set
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [openrouterForm, configureCustomEndpoint]);

useEffect(() => {
  const subscription = customForm.watch((values) => {
    // Manual validation: URL and model are present
    if (values.api_base && values.model) {
      try {
        // Basic URL validation
        new URL(values.api_base);
        configureCustomEndpoint({
          provider: "others",
          api_base: values.api_base,
          api_key: values.api_key,
          model: values.model,
        });
      } catch {
        // invalid URL
      }
    }
  });
  return () => subscription.unsubscribe();
}, [customForm, configureCustomEndpoint]);
```
🛠️ Refactor suggestion
Consider debouncing form submissions to improve performance.
The form watchers trigger on every change, potentially causing performance issues with frequent validation and configuration calls.
Consider using debouncing:
```ts
import { useMemo } from 'react';
import { debounce } from 'lodash'; // or implement your own

// Inside component:
const debouncedConfigure = useMemo(
  () => debounce(configureCustomEndpoint, 500),
  [configureCustomEndpoint]
);

// Then in useEffect:
useEffect(() => {
  const subscription = openaiForm.watch((values) => {
    if (values.api_key && values.api_key.startsWith("sk-") && values.model) {
      debouncedConfigure({
        provider: "openai",
        api_base: "",
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [openaiForm, debouncedConfigure]);
```

🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx around
lines 58 to 122, the form watchers call configureCustomEndpoint on every change,
which may cause performance issues. To fix this, import debounce from lodash and
create a debounced version of configureCustomEndpoint using useMemo with a delay
(e.g., 500ms). Replace direct calls to configureCustomEndpoint inside each form
watcher with calls to the debounced function, and update the useEffect
dependencies accordingly to use the debounced function instead of the original.
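If pulling in lodash only for this feels heavy, a dependency-free debounce is small enough to inline; a minimal sketch:

```ts
// Minimal debounce: defers the call until `waitMs` ms of inactivity,
// dropping intermediate invocations.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

One caveat with any debounced mutation here: a pending call can still fire after the component unmounts, so cancelling it in the effect cleanup is worth considering.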
```tsx
} catch {
  // invalid URL
}
```
🛠️ Refactor suggestion
Provide user feedback for invalid URLs.
The empty catch block silently ignores invalid URLs without informing the user.
Consider providing user feedback:
```diff
 } catch {
   // invalid URL
+  console.warn("Invalid URL provided:", values.api_base);
+  // Optionally, you could set a form error here
+  // customForm.setError('api_base', { message: 'Please enter a valid URL' });
 }
```
🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx around
lines 116 to 118, the catch block for invalid URLs is empty and does not provide
any user feedback. Modify the catch block to display an error message or
notification to the user indicating that the URL entered is invalid, ensuring
the user is informed of the issue instead of silently ignoring it.
```tsx
<div className="flex items-center gap-2">
  <svg
    fill="currentColor"
    fill-rule="evenodd"
```
Use camelCase for SVG attributes in React.
React requires camelCase for SVG attributes.
```diff
-fill-rule="evenodd"
+fillRule="evenodd"
```
🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx at
line 365, the SVG attribute "fill-rule" should be changed to camelCase as
"fillRule" to comply with React's requirement for SVG attributes.
```tsx
const currentLLMModel = useQuery({
  queryKey: ["current-llm-model"],
  queryFn: () => localLlmCommands.getCurrentModel(),
});
```
Add error handling for the query.
The query lacks error handling. If the backend call fails, users won't see any error indication.
Consider handling the error state:
```diff
 const currentLLMModel = useQuery({
   queryKey: ["current-llm-model"],
   queryFn: () => localLlmCommands.getCurrentModel(),
+  retry: 3,
+  onError: (error) => {
+    console.error("Failed to fetch current LLM model:", error);
+  }
 });
```

Then in the UI, you could show an error state:

```tsx
if (currentLLMModel.isError) {
  return <div className="text-red-500">Failed to load current model</div>;
}
```

🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/components/ai/llm-local-view.tsx around
lines 22 to 25, the useQuery call for fetching the current LLM model lacks error
handling. To fix this, add a check for the isError property on the query result
and render an error message in the UI when an error occurs, such as returning a
div with a clear error message to inform users that loading the current model
failed.
```tsx
onClick={() => {
  if (model.downloaded) {
    setSelectedSTTModel(model.key);
    localSttCommands.setCurrentModel(model.key as any);
```
Avoid using 'as any' type cast.
The `as any` cast bypasses TypeScript's type safety. Use the proper type instead.
```diff
-localSttCommands.setCurrentModel(model.key as any);
+localSttCommands.setCurrentModel(model.key as SupportedModel);
```

If `model.key` might not always be a valid `SupportedModel`, add type validation:

```tsx
if (Object.keys(sttModelMetadata).includes(model.key)) {
  localSttCommands.setCurrentModel(model.key as SupportedModel);
}
```

🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/components/ai/stt-view.tsx at line 202,
avoid using 'as any' to cast model.key. Instead, ensure model.key is a valid
SupportedModel by checking if it exists in sttModelMetadata keys before calling
localSttCommands.setCurrentModel. Use a type guard or conditional to confirm the
type and then cast to SupportedModel safely.
```tsx
useEffect(() => {
  const handleMigration = async () => {
    // Skip if no store exists at all
    if (!customLLMConnection.data && !customLLMEnabled.data) {
      return;
    }

    // Check if migration needed (no providerSource exists)
    if (!providerSourceQuery.data && customLLMConnection.data) {
      console.log("Migrating existing user to new provider system...");

      try {
        // Copy existing custom* fields to others* fields
        if (customLLMConnection.data.api_base) {
          await setOthersApiBaseMutation.mutateAsync(customLLMConnection.data.api_base);
        }
        if (customLLMConnection.data.api_key) {
          await setOthersApiKeyMutation.mutateAsync(customLLMConnection.data.api_key);
        }
        if (getCustomLLMModel.data) {
          await setOthersModelMutation.mutateAsync(getCustomLLMModel.data);
        }

        // Set provider source to 'others'
        await setProviderSourceMutation.mutateAsync("others");

        console.log("Migration completed successfully");
      } catch (error) {
        console.error("Migration failed:", error);
      }
    }
  };

  // Run migration when all queries have loaded
  if (
    providerSourceQuery.data !== undefined && customLLMConnection.data !== undefined
    && getCustomLLMModel.data !== undefined
  ) {
    handleMigration();
  }
}, [providerSourceQuery.data, customLLMConnection.data, getCustomLLMModel.data]);
```
Add safeguards to prevent migration from running multiple times
The migration logic could potentially run multiple times if the queries refetch, and there's no flag to track if migration has already been completed. Additionally, partial failures could leave the system in an inconsistent state.
Consider adding a migration flag and atomic operations:
```diff
 useEffect(() => {
   const handleMigration = async () => {
+    // Check if migration has already been attempted
+    const migrationKey = 'ai-provider-migration-v1';
+    const migrationCompleted = localStorage.getItem(migrationKey);
+    if (migrationCompleted) return;
+
     // Skip if no store exists at all
     if (!customLLMConnection.data && !customLLMEnabled.data) {
+      localStorage.setItem(migrationKey, 'skipped');
       return;
     }

     // Check if migration needed (no providerSource exists)
     if (!providerSourceQuery.data && customLLMConnection.data) {
       console.log("Migrating existing user to new provider system...");

       try {
         // Copy existing custom* fields to others* fields
         if (customLLMConnection.data.api_base) {
           await setOthersApiBaseMutation.mutateAsync(customLLMConnection.data.api_base);
         }
         if (customLLMConnection.data.api_key) {
           await setOthersApiKeyMutation.mutateAsync(customLLMConnection.data.api_key);
         }
         if (getCustomLLMModel.data) {
           await setOthersModelMutation.mutateAsync(getCustomLLMModel.data);
         }

         // Set provider source to 'others'
         await setProviderSourceMutation.mutateAsync("others");

+        localStorage.setItem(migrationKey, 'completed');
         console.log("Migration completed successfully");
       } catch (error) {
         console.error("Migration failed:", error);
+        // Consider showing user notification about migration failure
       }
     }
   };
```
🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/views/ai.tsx around lines 491 to 531,
the migration logic inside the useEffect can run multiple times due to query
refetches, risking inconsistent state on partial failures. To fix this,
introduce a persistent migration completion flag (e.g., in state or local
storage) to check before running migration and set it only after successful
completion. Also, ensure the migration steps are performed atomically or
rollback on failure to maintain consistency.
No description provided.