
Custom providers (OpenAI, Gemini, OpenRouter) #1228

Merged
duckduckhero merged 14 commits into main from custom-providers on Jul 29, 2025

Conversation

@duckduckhero
Contributor

No description provided.


coderabbitai bot commented Jul 29, 2025

📝 Walkthrough

This update introduces a modular, multi-provider AI configuration system in the desktop application. It adds new React components and TypeScript types for managing local and custom LLM/STT models, expands backend support for multiple AI providers (OpenAI, Gemini, OpenRouter, and custom endpoints), and extends the connector plugin's command, permission, and storage infrastructure to support granular configuration and access control for these providers.

Changes

  • Desktop App – New/Refactored React Components
    apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx, apps/desktop/src/components/settings/components/ai/llm-local-view.tsx, apps/desktop/src/components/settings/components/ai/stt-view.tsx
    Added new components for LLM custom-endpoint and local-model configuration; refactored STTView to use prop-driven state and handlers.
  • Desktop App – Shared Types and Props
    apps/desktop/src/components/settings/components/ai/shared.tsx
    Added TypeScript interfaces/types for LLM/STT models, provider configs, and shared props for the modular AI settings components (a type sketch follows this list).
  • Desktop App – AI Settings View
    apps/desktop/src/components/settings/views/ai.tsx
    Refactored the main AI settings view into a tabbed interface; integrated the new components plus provider-specific forms, validation, and migration logic.
  • Desktop App – Localization
    apps/desktop/src/locales/en/messages.po, apps/desktop/src/locales/ko/messages.po
    Added and updated localization strings for the new AI provider features and the refactored component paths.
  • Connector Plugin – Command Infrastructure
    plugins/connector/build.rs, plugins/connector/js/bindings.gen.ts, plugins/connector/src/commands.rs, plugins/connector/src/lib.rs
    Added getter/setter commands for API keys, models, API bases, and provider sources for OpenAI, Gemini, OpenRouter, and others; registered the new commands in the plugin.
  • Connector Plugin – Store Keys
    plugins/connector/src/store.rs
    Added new enum variants for storing the provider source, API keys, API bases, and model names.
  • Connector Plugin – Permissions: Command TOMLs
    plugins/connector/permissions/autogenerated/commands/*
    Added TOML permission files for all new get/set commands covering provider keys, models, API bases, and sources.
  • Connector Plugin – Permissions: Reference and Defaults
    plugins/connector/permissions/autogenerated/reference.md, plugins/connector/permissions/default.toml, plugins/connector/permissions/schemas/schema.json
    Updated documentation, default permissions, and schema to include the new permission kinds for all new provider commands.
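
To ground the shared-types entry above, here is a minimal sketch of what the provider-config types could look like. The names are partly hypothetical (the ProviderSource values and CustomEndpointConfig fields mirror the configureCustomEndpoint calls quoted later in this review); the authoritative definitions live in shared.tsx.

export type ProviderSource = "openai" | "gemini" | "openrouter" | "others";

// Field names mirror the configureCustomEndpoint payloads in llm-custom-view.tsx.
export interface CustomEndpointConfig {
  provider: ProviderSource;
  api_base: string; // auto-set for known providers, user-supplied for "others"
  api_key: string;
  model: string;
}

// Hypothetical controlled-component props for STTView; the review below
// mentions modal-control props and a WER performance modal.
export interface STTViewProps {
  selectedSTTModel: string;
  setSelectedSTTModel: (key: string) => void;
  isWERModalOpen: boolean;
  setIsWERModalOpen: (open: boolean) => void;
}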

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI as UI (LocalAI + child components)
    participant Backend as Backend (connector plugin)

    User->>UI: Selects "LLM - Custom" tab
    UI->>UI: Renders LLMCustomView with provider accordions
    User->>UI: Enters API key/model for a provider
    UI->>Backend: Calls setXxxApiKey / setXxxModel command
    Backend->>Backend: Stores value under StoreKey
    Backend-->>UI: Confirms storage
    UI->>Backend: Calls setProviderSource with provider name
    Backend-->>UI: Confirms provider source set
    User->>UI: Switches to "LLM - Local" or "Transcription"
    UI->>Backend: Fetches local model info/status
    Backend-->>UI: Returns model info
    User->>UI: Initiates model download
    UI->>Backend: Triggers download handler
    Backend-->>UI: Updates download status
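
To make the diagram concrete, here is a hedged sketch of the UI-side calls, assuming the generated bindings expose setter commands matching the names in the walkthrough; the import path and exact signatures are illustrative, and the real ones live in bindings.gen.ts.

import { commands as connector } from "@hypr/plugin-connector"; // hypothetical import path

// Persist an OpenAI key and model, then mark OpenAI as the active provider.
async function saveOpenAiConfig(apiKey: string, model: string) {
  await connector.setOpenaiApiKey(apiKey);
  await connector.setOpenaiModel(model);
  await connector.setProviderSource("openai");
}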

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40–60 minutes

Possibly related PRs

  • Add hypr-llm as downloadable option #1108: adds the RatingDisplay and LanguageDisplay UI components to shared.tsx and refactors stt-view.tsx to import them; related because both PRs touch UI components under the AI settings directory and share shared.tsx.

Suggested reviewers

  • yujonglee



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bee8ef8 and 9bf56c4.

📒 Files selected for processing (1)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

⚙️ CodeRabbit Configuration File

**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".

Files:

  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: ci (macos, macos-latest)
  • GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (11)
apps/desktop/src/components/settings/components/ai/stt-view.tsx (11)

1-11: LGTM!

All imports are properly used throughout the component. The refactoring to use controlled component props is well-reflected in the import changes.


13-101: LGTM!

The STT model metadata is well-structured with consistent properties across all models. The Record typing provides good type safety.
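
For readers without the file open, the metadata plausibly follows a shape like the sketch below; the field names are illustrative, while the model keys match those used in the download-status query later in this review.

type STTModelMeta = {
  name: string;
  accuracy: number; // rendered as the colored-dot rating noted below
  speed: number;
  downloaded: boolean;
};

const sttModelMetadata: Record<string, STTModelMeta> = {
  QuantizedTiny: { name: "Tiny (Quantized)", accuracy: 1, speed: 3, downloaded: false },
  QuantizedLargeTurbo: { name: "Large Turbo (Quantized)", accuracy: 3, speed: 1, downloaded: false },
  // ...one entry per supported model...
};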


103-106: LGTM!

Clean interface extension that properly adds the modal control props needed for the controlled component pattern.


108-119: LGTM!

The component function properly destructures all controlled props and the comment provides helpful context about the backend interaction.


120-123: LGTM!

Standard React Query setup for fetching the current STT model from the backend.


125-129: LGTM!

The effect properly syncs the backend data with the controlled state, with correct dependencies.


132-155: LGTM!

Efficient implementation that batches all model download status checks and uses an appropriate polling interval. The Promise.all approach and return type are well-designed.


157-166: LGTM!

The effect correctly updates model download status using proper functional state updates while preserving other model properties.


168-184: LGTM!

Clean header section with proper tooltip integration for the WER performance modal.


186-198: LGTM!

Well-structured model cards with appropriate conditional styling, proper event handling with stopPropagation, and clean rating display using colored dots.

Also applies to: 206-315


317-321: LGTM!

Clean modal integration following the controlled component pattern.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

♻️ Duplicate comments (1)
apps/desktop/src/components/settings/components/ai/stt-view.tsx (1)

283-283: Make the file manager text platform-aware.

"Show in Finder" is macOS-specific terminology.

🧹 Nitpick comments (5)
plugins/connector/permissions/autogenerated/commands/get_others_model.toml (1)

5-13: Minor nit on identifier wording

“others” is a bit ambiguous compared to provider-specific identifiers used elsewhere. If “Others” maps to an actual provider enum variant, disregard; otherwise consider renaming for clarity (e.g., get_custom_model).

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)

106-106: Make the file manager text platform-aware.

"Show in Finder" is macOS-specific. Consider making this text platform-aware for better cross-platform UX.

You could use a helper function to get the appropriate text:

const getFileManagerText = () => {
  if (navigator.userAgent.includes('Mac')) return 'Show in Finder';
  if (navigator.userAgent.includes('Win')) return 'Show in Explorer';
  return 'Show in File Manager';
};

Then use it in the button:

-Show in Finder
+{getFileManagerText()}
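
An alternative worth noting: since this is a Tauri desktop app, the OS plugin reports the platform directly, which is more reliable than userAgent sniffing. A sketch assuming Tauri v2's @tauri-apps/plugin-os is available (its platform() is synchronous):

import { platform } from "@tauri-apps/plugin-os";

const getFileManagerText = () => {
  switch (platform()) {
    case "macos":
      return "Show in Finder";
    case "windows":
      return "Show in Explorer";
    default:
      return "Show in File Manager";
  }
};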
apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (2)

61-61: Consider externalizing API key validation patterns.

The hardcoded API key prefixes might become outdated and are duplicated across providers.

Create a configuration object for validation patterns:

const providerValidation = {
  openai: { keyPrefix: 'sk-', keyPattern: /^sk-[a-zA-Z0-9]+$/ },
  gemini: { keyPrefix: 'AIza', keyPattern: /^AIza[a-zA-Z0-9]+$/ },
  openrouter: { keyPrefix: 'sk-', keyPattern: /^sk-[a-zA-Z0-9]+$/ }
};

This makes it easier to update patterns and add more sophisticated validation if needed.

Also applies to: 76-76, 91-91
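
Consuming that table might look like the following sketch; validateApiKey is a hypothetical helper, and the patterns are illustrative rather than official key formats:

type Provider = keyof typeof providerValidation;

function validateApiKey(provider: Provider, key: string): boolean {
  const { keyPattern } = providerValidation[provider];
  return keyPattern.test(key);
}

// e.g. validateApiKey("gemini", values.api_key) before calling configureCustomEndpoint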


18-38: Consider making model lists configurable or dynamic.

Hardcoded model lists will require code changes when providers add or remove models.

Consider:

  1. Fetching available models from each provider's API
  2. Storing model lists in a configuration file
  3. Allowing users to input custom model names

This would make the component more maintainable and future-proof.
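
Option 1 could look roughly like this for OpenAI-compatible providers, which expose GET {api_base}/models returning { data: [{ id }] }; Gemini's model-listing API differs, so this sketch only covers the OpenAI/OpenRouter case:

// A sketch assuming an OpenAI-compatible /models endpoint.
async function fetchAvailableModels(apiBase: string, apiKey: string): Promise<string[]> {
  const res = await fetch(`${apiBase}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}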

apps/desktop/src/components/settings/views/ai.tsx (1)

76-88: Consider making URL validation less restrictive

The current validation logic enforces /v1 suffix for URLs containing "openai" or "openrouter", which might be too restrictive for custom endpoints. Some providers might use different API versioning schemes or paths.

Consider making this validation optional or configurable:

-  api_base: z.string().url({ message: "Please enter a valid URL" }).min(1, { message: "URL is required" }).refine(
-    (value) => {
-      const v1Needed = ["openai", "openrouter"].some((host) => value.includes(host));
-      if (v1Needed && !value.endsWith("/v1")) {
-        return false;
-      }
-      return true;
-    },
-    { message: "Should end with '/v1'" },
-  ).refine(
+  api_base: z.string().url({ message: "Please enter a valid URL" }).min(1, { message: "URL is required" }).refine(
     (value) => !value.includes("chat/completions"),
     { message: "`/chat/completions` will be appended automatically" },
   ),
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between adef3b7 and bee8ef8.

📒 Files selected for processing (34)
  • apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1 hunks)
  • apps/desktop/src/components/settings/components/ai/shared.tsx (2 hunks)
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx (2 hunks)
  • apps/desktop/src/components/settings/views/ai.tsx (7 hunks)
  • apps/desktop/src/locales/en/messages.po (20 hunks)
  • apps/desktop/src/locales/ko/messages.po (20 hunks)
  • plugins/connector/build.rs (1 hunks)
  • plugins/connector/js/bindings.gen.ts (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_gemini_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_gemini_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_openai_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_openrouter_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_openrouter_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_others_api_base.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_others_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_others_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/get_provider_source.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_gemini_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_gemini_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_openai_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_openai_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_openrouter_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_openrouter_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_others_api_base.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_others_api_key.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_others_model.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/commands/set_provider_source.toml (1 hunks)
  • plugins/connector/permissions/autogenerated/reference.md (4 hunks)
  • plugins/connector/permissions/default.toml (1 hunks)
  • plugins/connector/permissions/schemas/schema.json (3 hunks)
  • plugins/connector/src/commands.rs (1 hunks)
  • plugins/connector/src/lib.rs (1 hunks)
  • plugins/connector/src/store.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}

⚙️ CodeRabbit Configuration File

**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".

Files:

  • plugins/connector/src/lib.rs
  • plugins/connector/src/store.rs
  • apps/desktop/src/components/settings/components/ai/llm-local-view.tsx
  • plugins/connector/build.rs
  • apps/desktop/src/components/settings/components/ai/llm-custom-view.tsx
  • plugins/connector/js/bindings.gen.ts
  • apps/desktop/src/components/settings/views/ai.tsx
  • apps/desktop/src/components/settings/components/ai/stt-view.tsx
  • apps/desktop/src/components/settings/components/ai/shared.tsx
  • plugins/connector/src/commands.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: ci (macos, macos-latest)
  • GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (39)
plugins/connector/permissions/autogenerated/commands/set_others_api_base.toml (2)

1-4: Header & schema path look correct – no concerns
Autogenerated banner and relative path resolve cleanly (../../schemas/schema.json).


5-13: Command Implementation & Registration Confirmed

The set_others_api_base function is defined in plugins/connector/src/commands.rs (line 221) and registered in plugins/connector/src/lib.rs (line 37). No further changes are required.

plugins/connector/permissions/autogenerated/commands/get_gemini_model.toml (2)

1-4: Header & schema path look correct – no concerns


5-13: Command get_gemini_model implementation and binding verified

  • Rust implementation found at plugins/connector/src/commands.rs:289
  • JS binding exists at plugins/connector/js/bindings.gen.ts:83

No changes required.

plugins/connector/permissions/autogenerated/commands/set_others_model.toml (2)

1-4: Header & schema path look correct – no concerns


5-13: set_others_model Permission Wiring Confirmed
The set_others_model command is implemented and registered, so the TOML permission entries will be active:

  • Implementation: plugins/connector/src/commands.rs:244
  • Registration: plugins/connector/src/lib.rs:39
plugins/connector/permissions/autogenerated/commands/get_provider_source.toml (2)

1-4: Header & schema path look correct – no concerns


5-13: get_provider_source command verification passed
The get_provider_source function is implemented and exposed correctly:

  • Rust implementation found at plugins/connector/src/commands.rs:163
  • JS binding confirmed in plugins/connector/js/bindings.gen.ts:53

No further action needed.

plugins/connector/permissions/autogenerated/commands/get_openai_model.toml (2)

1-4: Header & schema path look correct – no concerns


5-13: get_openai_model is implemented & exposed
Confirmed that the command exists in Rust and is wired up in the JS bindings, so the TOML permissions are in sync.
• plugins/connector/src/commands.rs:266 – pub async fn get_openai_model…
• plugins/connector/js/bindings.gen.ts:77 – TAURI_INVOKE("plugin:connector|get_openai_model")

plugins/connector/permissions/autogenerated/commands/set_openai_model.toml (1)

1-14: Permissions entry looks correct – LGTM
Schema path, identifiers, and allow/deny lists follow the established pattern. No issues spotted.

plugins/connector/permissions/autogenerated/commands/set_openrouter_model.toml (1)

1-14: Permissions entry looks correct – LGTM
Consistent naming, schema reference, and command lists. Good to merge.

plugins/connector/permissions/autogenerated/commands/set_provider_source.toml (1)

1-14: Permissions entry looks correct – LGTM
All fields adhere to the conventions used across the permission set.

plugins/connector/permissions/autogenerated/commands/get_others_api_base.toml (1)

1-14: Permissions entry looks correct – LGTM
Unique identifiers and command mapping are sound.

plugins/connector/permissions/autogenerated/commands/set_gemini_api_key.toml (1)

1-14: Permissions entry looks correct – LGTM
Matches the schema path and naming conventions; no further action needed.

plugins/connector/permissions/autogenerated/commands/get_gemini_api_key.toml (1)

5-13: Pattern consistent – no blocking issues

The allow/deny dual-entry pattern and identifier naming are consistent with existing autogenerated permission files. Schema reference and descriptions look correct.

plugins/connector/permissions/autogenerated/commands/set_gemini_model.toml (1)

5-13: Matches established permission-file convention

File follows the standard structure (schema link, paired allow/deny sections, clear identifiers). No action required.

plugins/connector/permissions/autogenerated/commands/get_openrouter_model.toml (1)

5-13: Structure and identifiers look good

Conforms to the autogenerated pattern; identifiers and command names are accurate.

plugins/connector/permissions/autogenerated/commands/get_openrouter_api_key.toml (1)

5-13: No issues found

Schema path, allow/deny entries, and descriptions align with the rest of the permissions set.

plugins/connector/permissions/autogenerated/commands/set_openai_api_key.toml (1)

1-13: File correctly generated – no issues detected

Schema reference, identifiers, descriptions and allow/deny arrays follow the established pattern used across the permission set.
Nothing to change.

plugins/connector/permissions/autogenerated/commands/set_others_api_key.toml (1)

1-13: Consistent permission stub

Structure and naming match the existing convention; looks good.

plugins/connector/permissions/autogenerated/commands/set_openrouter_api_key.toml (1)

1-13: Permission file aligns with convention

The allow/deny entries are in place and correctly named.

plugins/connector/permissions/autogenerated/commands/get_others_api_key.toml (1)

1-13: LGTM

Correct schema reference and symmetric allow/deny blocks present.

plugins/connector/src/store.rs (1)

14-20: Verify downstream handling of new StoreKey variants

Seven variants were appended. Ensure:

  1. Every new key is registered in all persistence helpers and JS bindings (bindings.gen.ts) so specta can export them.
  2. Migration logic (if any) tolerates the new discriminants when reading older persisted data; otherwise, add version-gated fallback.

No code issues here, just a reminder to confirm the end-to-end flow.
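
For point 1, the exported bindings would be expected to follow the TAURI_INVOKE pattern quoted elsewhere in this review. A hypothetical excerpt (the real file is specta-generated, so naming and error typing may differ):

// TAURI_INVOKE is the generated invoke wrapper used throughout bindings.gen.ts.
declare function TAURI_INVOKE<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

export async function getProviderSource(): Promise<string | null> {
  return await TAURI_INVOKE("plugin:connector|get_provider_source");
}

export async function setProviderSource(source: string): Promise<void> {
  await TAURI_INVOKE("plugin:connector|set_provider_source", { source });
}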

plugins/connector/src/lib.rs (1)

29-47: LGTM! Well-structured command registration for multi-provider AI support.

The new command registrations follow consistent naming conventions and properly support the multi-provider AI configuration feature. The getter/setter pattern for API keys, models, and provider sources is well-implemented across OpenAI, Gemini, OpenRouter, and custom endpoints.

plugins/connector/permissions/default.toml (1)

15-33: LGTM! Comprehensive permission coverage for new AI provider commands.

The new permissions properly correspond to all the commands added in lib.rs and follow the established naming convention. The granular allow-get- and allow-set- permissions provide appropriate access control for the multi-provider AI configuration features.

plugins/connector/build.rs (1)

13-31: LGTM! Complete and accurate command list for build configuration.

The COMMANDS array properly includes all new AI provider commands with names that exactly match those registered in lib.rs. This ensures proper plugin build and binding generation for the multi-provider AI features.

apps/desktop/src/locales/en/messages.po (3)

319-321: LGTM! Clear and informative AI provider messages.

The new localization messages provide excellent user guidance for different AI providers (OpenAI, Gemini, OpenRouter, Others), with clear descriptions of what each option offers. The messaging is consistent and user-friendly.

Also applies to: 794-796, 1091-1097, 1107-1109, 1441-1447


623-625: LGTM! Appropriate messages for new UI structure.

The localization messages properly support the new tabbed interface with clear section names like "Custom Endpoints", "LLM - Local", "LLM - Custom", and "Transcription". These help users navigate the reorganized AI configuration interface.

Also applies to: 903-909, 931-933, 1401-1403


385-395: LGTM! Clear form field guidance.

The localization messages for form fields provide helpful guidance for users configuring API keys, base URLs, and model selections. The instructional text clearly explains what information users need to provide for each field.

Also applies to: 713-719, 961-969, 1232-1234

apps/desktop/src/locales/ko/messages.po (1)

319-321: LGTM! Proper structural consistency for Korean localization.

The Korean localization file maintains excellent structural consistency with the English version, including all new message IDs for AI providers, UI sections, and form fields. While translations are not yet provided (empty msgstr), the structure is properly prepared for future translation work.

Also applies to: 623-625, 794-796, 903-909, 931-933, 1091-1097, 1107-1109, 1232-1234, 1401-1403, 1441-1447

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)

54-62: Add error handling for async operations in onClick handler.

Multiple async operations are performed without error handling, which could lead to inconsistent state if any operation fails.

Wrap the operations in a try-catch block or handle errors individually:

 onClick={() => {
   if (model.available && model.downloaded) {
-    setSelectedLLMModel(model.key);
-    localLlmCommands.setCurrentModel(model.key as SupportedModel);
-    // CRITICAL: Disable custom LLM when local model is selected
-    setCustomLLMEnabledMutation.mutate(false);
-    localLlmCommands.restartServer();
+    const handleModelSelection = async () => {
+      try {
+        setSelectedLLMModel(model.key);
+        await localLlmCommands.setCurrentModel(model.key as SupportedModel);
+        // CRITICAL: Disable custom LLM when local model is selected
+        setCustomLLMEnabledMutation.mutate(false);
+        await localLlmCommands.restartServer();
+      } catch (error) {
+        console.error("Failed to select model:", error);
+        // Reset UI state on error
+        setSelectedLLMModel(currentLLMModel.data || "");
+      }
+    };
+    handleModelSelection();
   }
 }}

Likely an incorrect or invalid review comment.

apps/desktop/src/components/settings/components/ai/stt-view.tsx (2)

199-205: Add error handling for model selection operations.

The async operations in the onClick handler lack error handling.

 onClick={() => {
   if (model.downloaded) {
-    setSelectedSTTModel(model.key);
-    localSttCommands.setCurrentModel(model.key as any);
-    localSttCommands.restartServer();
+    const handleModelSelection = async () => {
+      try {
+        setSelectedSTTModel(model.key);
+        await localSttCommands.setCurrentModel(model.key as SupportedModel);
+        await localSttCommands.restartServer();
+      } catch (error) {
+        console.error("Failed to select STT model:", error);
+        // Reset selection on error
+        setSelectedSTTModel(currentSTTModel.data || "");
+      }
+    };
+    handleModelSelection();
   }
 }}

Likely an incorrect or invalid review comment.


132-155: Improve the download status query implementation.

The implementation has hardcoded model names and lacks error handling.

Use the model metadata keys dynamically and add error handling:

 const sttModelDownloadStatus = useQuery({
   queryKey: ["stt-model-download-status"],
   queryFn: async () => {
-    const statusChecks = await Promise.all([
-      localSttCommands.isModelDownloaded("QuantizedTiny"),
-      localSttCommands.isModelDownloaded("QuantizedTinyEn"),
-      localSttCommands.isModelDownloaded("QuantizedBase"),
-      localSttCommands.isModelDownloaded("QuantizedBaseEn"),
-      localSttCommands.isModelDownloaded("QuantizedSmall"),
-      localSttCommands.isModelDownloaded("QuantizedSmallEn"),
-      localSttCommands.isModelDownloaded("QuantizedLargeTurbo"),
-    ]);
-    return {
-      "QuantizedTiny": statusChecks[0],
-      "QuantizedTinyEn": statusChecks[1],
-      "QuantizedBase": statusChecks[2],
-      "QuantizedBaseEn": statusChecks[3],
-      "QuantizedSmall": statusChecks[4],
-      "QuantizedSmallEn": statusChecks[5],
-      "QuantizedLargeTurbo": statusChecks[6],
-    } as Record<string, boolean>;
+    const models = Object.keys(sttModelMetadata) as SupportedModel[];
+    const statusChecks = await Promise.allSettled(
+      models.map(model => localSttCommands.isModelDownloaded(model))
+    );
+    
+    return models.reduce((acc, model, index) => {
+      const result = statusChecks[index];
+      acc[model] = result.status === 'fulfilled' ? result.value : false;
+      return acc;
+    }, {} as Record<string, boolean>);
   },
   refetchInterval: 3000,
+  onError: (error) => {
+    console.error("Failed to check model download status:", error);
+  }
 });

Likely an incorrect or invalid review comment.

plugins/connector/permissions/schemas/schema.json (1)

333-656: LGTM! Well-structured permission definitions.

The new permissions follow a consistent naming pattern and each allow permission has its corresponding deny permission. The structure aligns well with the multi-provider architecture.

plugins/connector/permissions/autogenerated/reference.md (1)

17-36: LGTM! Documentation properly reflects the new permissions.

The auto-generated documentation correctly lists all the new permissions for multi-provider support with consistent formatting and clear descriptions.

Also applies to: 128-176, 258-279, 284-331, 336-383, 388-435, 567-823

plugins/connector/js/bindings.gen.ts (1)

1-175: Auto-generated file looks correct

The generated bindings follow the established patterns and are properly typed. No issues found.

apps/desktop/src/components/settings/components/ai/shared.tsx (1)

44-141: Well-structured type definitions

The new interfaces and types provide excellent type safety and clear contracts for the AI configuration components. Good use of TypeScript features and proper separation of concerns.

plugins/connector/src/commands.rs (1)

104-320: Consistent and well-implemented storage commands

All new commands follow the established patterns correctly with proper error handling and consistent behavior. The use of unwrap_or_default() for getters ensures safe handling of missing values.

Comment on lines +58 to +122
useEffect(() => {
  const subscription = openaiForm.watch((values) => {
    // Manual validation: OpenAI key starts with "sk-" and model is selected
    if (values.api_key && values.api_key.startsWith("sk-") && values.model) {
      configureCustomEndpoint({
        provider: "openai",
        api_base: "", // Will be auto-set
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [openaiForm, configureCustomEndpoint]);

useEffect(() => {
  const subscription = geminiForm.watch((values) => {
    // Manual validation: Gemini key starts with "AIza" and model is selected
    if (values.api_key && values.api_key.startsWith("AIza") && values.model) {
      configureCustomEndpoint({
        provider: "gemini",
        api_base: "", // Will be auto-set
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [geminiForm, configureCustomEndpoint]);

useEffect(() => {
  const subscription = openrouterForm.watch((values) => {
    // Manual validation: OpenRouter key starts with "sk-" and model is selected
    if (values.api_key && values.api_key.startsWith("sk-") && values.model) {
      configureCustomEndpoint({
        provider: "openrouter",
        api_base: "", // Will be auto-set
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [openrouterForm, configureCustomEndpoint]);

useEffect(() => {
  const subscription = customForm.watch((values) => {
    // Manual validation: URL and model are present
    if (values.api_base && values.model) {
      try {
        // Basic URL validation
        new URL(values.api_base);
        configureCustomEndpoint({
          provider: "others",
          api_base: values.api_base,
          api_key: values.api_key,
          model: values.model,
        });
      } catch {
        // invalid URL
      }
    }
  });
  return () => subscription.unsubscribe();
}, [customForm, configureCustomEndpoint]);
🛠️ Refactor suggestion

Consider debouncing form submissions to improve performance.

The form watchers trigger on every change, potentially causing performance issues with frequent validation and configuration calls.

Consider using debouncing:

import { useMemo } from 'react';
import { debounce } from 'lodash'; // or implement your own

// Inside component:
const debouncedConfigure = useMemo(
  () => debounce(configureCustomEndpoint, 500),
  [configureCustomEndpoint]
);

// Then in useEffect:
useEffect(() => {
  const subscription = openaiForm.watch((values) => {
    if (values.api_key && values.api_key.startsWith("sk-") && values.model) {
      debouncedConfigure({
        provider: "openai",
        api_base: "",
        api_key: values.api_key,
        model: values.model,
      });
    }
  });
  return () => subscription.unsubscribe();
}, [openaiForm, debouncedConfigure]);
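
One caveat with this approach: lodash's debounce keeps a pending timer, so it should be cancelled on unmount to avoid a stray configuration write after the component is gone:

useEffect(() => {
  return () => debouncedConfigure.cancel();
}, [debouncedConfigure]);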

Comment on lines +116 to +118
} catch {
  // invalid URL
}
🛠️ Refactor suggestion

Provide user feedback for invalid URLs.

The empty catch block silently ignores invalid URLs without informing the user.

Consider providing user feedback:

 } catch {
   // invalid URL
+  console.warn("Invalid URL provided:", values.api_base);
+  // Optionally, you could set a form error here
+  // customForm.setError('api_base', { message: 'Please enter a valid URL' });
 }

<div className="flex items-center gap-2">
  <svg
    fill="currentColor"
    fill-rule="evenodd"
⚠️ Potential issue

Use camelCase for SVG attributes in React.

React requires camelCase for SVG attributes.

-fill-rule="evenodd"
+fillRule="evenodd"

Comment on lines +22 to +25
const currentLLMModel = useQuery({
  queryKey: ["current-llm-model"],
  queryFn: () => localLlmCommands.getCurrentModel(),
});
⚠️ Potential issue

Add error handling for the query.

The query lacks error handling. If the backend call fails, users won't see any error indication.

Consider handling the error state:

 const currentLLMModel = useQuery({
   queryKey: ["current-llm-model"],
   queryFn: () => localLlmCommands.getCurrentModel(),
+  retry: 3,
+  onError: (error) => {
+    console.error("Failed to fetch current LLM model:", error);
+  }
 });

Then in the UI, you could show an error state:

if (currentLLMModel.isError) {
  return <div className="text-red-500">Failed to load current model</div>;
}

onClick={() => {
  if (model.downloaded) {
    setSelectedSTTModel(model.key);
    localSttCommands.setCurrentModel(model.key as any);
⚠️ Potential issue

Avoid using 'as any' type cast.

The as any cast bypasses TypeScript's type safety. Use the proper type instead.

-localSttCommands.setCurrentModel(model.key as any);
+localSttCommands.setCurrentModel(model.key as SupportedModel);

If model.key might not always be a valid SupportedModel, add type validation:

if (Object.keys(sttModelMetadata).includes(model.key)) {
  localSttCommands.setCurrentModel(model.key as SupportedModel);
}
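
The check can also be packaged as a reusable type guard so the cast disappears entirely (isSupportedModel is a hypothetical helper; SupportedModel and sttModelMetadata come from the PR):

function isSupportedModel(key: string): key is SupportedModel {
  return key in sttModelMetadata;
}

if (isSupportedModel(model.key)) {
  localSttCommands.setCurrentModel(model.key); // no cast needed
}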

Comment on lines +491 to +531
useEffect(() => {
  const handleMigration = async () => {
    // Skip if no store exists at all
    if (!customLLMConnection.data && !customLLMEnabled.data) {
      return;
    }

    // Check if migration needed (no providerSource exists)
    if (!providerSourceQuery.data && customLLMConnection.data) {
      console.log("Migrating existing user to new provider system...");

      try {
        // Copy existing custom* fields to others* fields
        if (customLLMConnection.data.api_base) {
          await setOthersApiBaseMutation.mutateAsync(customLLMConnection.data.api_base);
        }
        if (customLLMConnection.data.api_key) {
          await setOthersApiKeyMutation.mutateAsync(customLLMConnection.data.api_key);
        }
        if (getCustomLLMModel.data) {
          await setOthersModelMutation.mutateAsync(getCustomLLMModel.data);
        }

        // Set provider source to 'others'
        await setProviderSourceMutation.mutateAsync("others");

        console.log("Migration completed successfully");
      } catch (error) {
        console.error("Migration failed:", error);
      }
    }
  };

  // Run migration when all queries have loaded
  if (
    providerSourceQuery.data !== undefined && customLLMConnection.data !== undefined
    && getCustomLLMModel.data !== undefined
  ) {
    handleMigration();
  }
}, [providerSourceQuery.data, customLLMConnection.data, getCustomLLMModel.data]);
⚠️ Potential issue

Add safeguards to prevent migration from running multiple times

The migration logic could potentially run multiple times if the queries refetch, and there's no flag to track if migration has already been completed. Additionally, partial failures could leave the system in an inconsistent state.

Consider adding a migration flag and atomic operations:

   useEffect(() => {
     const handleMigration = async () => {
+      // Check if migration has already been attempted
+      const migrationKey = 'ai-provider-migration-v1';
+      const migrationCompleted = localStorage.getItem(migrationKey);
+      if (migrationCompleted) return;
+
       // Skip if no store exists at all
       if (!customLLMConnection.data && !customLLMEnabled.data) {
+        localStorage.setItem(migrationKey, 'skipped');
         return;
       }

       // Check if migration needed (no providerSource exists)
       if (!providerSourceQuery.data && customLLMConnection.data) {
         console.log("Migrating existing user to new provider system...");

         try {
           // Copy existing custom* fields to others* fields
           if (customLLMConnection.data.api_base) {
             await setOthersApiBaseMutation.mutateAsync(customLLMConnection.data.api_base);
           }
           if (customLLMConnection.data.api_key) {
             await setOthersApiKeyMutation.mutateAsync(customLLMConnection.data.api_key);
           }
           if (getCustomLLMModel.data) {
             await setOthersModelMutation.mutateAsync(getCustomLLMModel.data);
           }

           // Set provider source to 'others'
           await setProviderSourceMutation.mutateAsync("others");
+          localStorage.setItem(migrationKey, 'completed');

           console.log("Migration completed successfully");
         } catch (error) {
           console.error("Migration failed:", error);
+          // Consider showing user notification about migration failure
         }
       }
     };

@duckduckhero merged commit a3e9e02 into main on Jul 29, 2025
9 of 10 checks passed
This was referenced Aug 13, 2025
@ComputelessComputer deleted the custom-providers branch on December 14, 2025 at 15:23