📝 Walkthrough

Multiple dependency version bumps across packages. The local-llm plugin gains ModelSelection/CustomModelInfo types, new Tauri commands and store key, and macOS custom-model enumeration. GGUF crate centralizes metadata parsing and exposes chat_format and model_name; llama uses chat_format. Desktop UI adapts to ModelSelection and provider-aware download checks.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant UI as Desktop UI
participant PL as local-llm Plugin (Tauri)
participant ST as Store/Ext
participant FS as Filesystem
participant SRV as Local Server
participant GG as GGUF crate
participant LM as Llama crate
Note over UI,PL: Model selection & custom model listing
UI->>PL: get_current_model_selection()
PL->>ST: read StoreKey::ModelSelection
ST-->>PL: ModelSelection
PL-->>UI: ModelSelection
UI->>PL: list_custom_models()
PL->>ST: enumerate LMStudio dir (macOS)
ST->>FS: scan gguf/custom/*.gguf
FS-->>ST: paths
ST->>GG: model_name(path)
GG-->>ST: Option<String>
ST-->>PL: [CustomModelInfo]
PL-->>UI: [CustomModelInfo]
UI->>PL: set_current_model_selection(selection)
PL->>ST: store selection (and legacy Model if Predefined)
ST-->>PL: Ok
PL-->>UI: Ok
Note over UI,SRV: Server restart after selection
UI->>SRV: restart
SRV->>PL: get_current_model_selection()
PL->>ST: ModelSelection -> file_path(models_dir)
ST->>FS: check exists
FS-->>ST: Ok/Err
alt exists
SRV->>LM: init(model_path)
LM->>GG: chat_format(model_path)
GG-->>LM: Option<Template>
LM-->>SRV: ready
else missing
SRV-->>UI: ModelNotDownloaded
end
```
```mermaid
sequenceDiagram
autonumber
participant Reader as read_gguf_metadata
participant File as GGUF file
participant Caller as chat_format / model_name
Caller->>Reader: read_gguf_metadata(path)
Reader->>File: open + parse headers + iterate metadata
File-->>Reader: metadata entries
alt tokenizer.chat_template present (string)
Reader-->>Caller: tokenizer.chat_template (Template)
else architecture known
Reader-->>Caller: inferred ChatTemplate
else
Reader-->>Caller: None
end
Caller-->>Caller: model_name reads general.name via same reader
```
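The diagram implies a small public surface on `GgufExt`. Below is a usage sketch; the method names (`model_name`, `chat_format`) and return shapes come from this review, while the exact signatures and trait bounds (e.g. `Debug` on the template type) are assumptions:

```rust
use std::path::Path;

use hypr_gguf::GgufExt; // extension trait on Path, per the diffs in this review

fn describe_model(path: &Path) {
    // Assumed: model_name() -> Result<Option<String>>, reading general.name.
    if let Ok(Some(name)) = path.model_name() {
        println!("general.name = {name}");
    }
    // Assumed: chat_format() -> Result<Option<ChatTemplate>>, preferring
    // tokenizer.chat_template and falling back to the architecture mapping.
    if let Ok(Some(template)) = path.chat_format() {
        println!("chat template = {template:?}");
    }
}

fn main() {
    describe_model(Path::new("model.gguf"));
}
```

Both accessors go through the shared `read_gguf_metadata` reader, so each call is a single pass over the file's metadata section.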
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/gguf/src/lib.rs (1)
48-81: Add Phi2 and Qwen support and provide a safe fallback

- In crates/gguf/src/template.rs, add enum variants for `Phi2` and `Qwen` (with `#[strum(serialize = "phi2")]`, `#[strum(serialize = "qwen")]`, and any needed aliases like `"qwen2"`, `"qwen3"`).
- In crates/gguf/src/lib.rs, extend the match to map `"phi2"` → `Ok(Some(ChatTemplate::TemplateKey(LlamaCppRegistry::Phi2)))` and `"qwen" | "qwen2" | "qwen3"` → `Ok(Some(ChatTemplate::TemplateKey(LlamaCppRegistry::Qwen)))`.
- Replace the catch-all `_ => Ok(None)` with `_ => Ok(Some(ChatTemplate::TemplateValue(architecture)))` so that unknown architectures don't bubble into the upstream `.unwrap()` and panic. A combined sketch follows this list.
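A sketch assembling the suggested arms; `ChatTemplate` and `LlamaCppRegistry` are the crate's types as named in the comment above, and the `Result` alias and surrounding function shape are assumptions:

```rust
// Sketch of the suggested dispatch; `architecture` is the string read from the
// GGUF general.architecture metadata key.
fn template_for(architecture: String) -> Result<Option<ChatTemplate>> {
    match architecture.as_str() {
        "phi2" => Ok(Some(ChatTemplate::TemplateKey(LlamaCppRegistry::Phi2))),
        "qwen" | "qwen2" | "qwen3" => Ok(Some(ChatTemplate::TemplateKey(LlamaCppRegistry::Qwen))),
        // Fallback: return the raw architecture as a template value instead of
        // None, so the upstream `.unwrap()` cannot panic on unknown strings.
        _ => Ok(Some(ChatTemplate::TemplateValue(architecture))),
    }
}
```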
🧹 Nitpick comments (12)
packages/obsidian/package.json (1)

20-20: Consider making react-query a peerDependency to avoid duplicate instances across consumers

Since this package exports generated TanStack hooks, treating `@tanstack/react-query` as a peer reduces bundle dupes and version skew across apps. Apply:

```diff
   "dependencies": {
     "@hey-api/client-fetch": "^0.8.4",
-    "@tanstack/react-query": "^5.87.1"
+  },
+  "peerDependencies": {
+    "@tanstack/react-query": "^5.87.1"
+  },
+  "devDependencies": {
+    "@hey-api/openapi-ts": "^0.78.3",
+    "@tanstack/react-query": "^5.87.1"
   }
```

admin/server/package.json (1)

10-10: Align @types/node versions to 22.x across the monorepo

admin/server/package.json and apps/pro/package.json currently pin ^20.19.13; bump both to ^22.18.1 to match apps/admin, apps/desktop, packages/tiptap and packages/ui.

plugins/local-llm/src/error.rs (3)
21-22: Catch-all error variant added; consider a more future-proof naming

`Other(String)` works, but `Unknown(String)` reads clearer and avoids implying there's a known set beyond this. Optional.

```diff
-    #[error("Other error: {0}")]
-    Other(String),
+    #[error("Unknown error: {0}")]
+    Unknown(String),
```

19-20: Normalize user-facing error casing

Match sentence casing used elsewhere.

```diff
-    #[error("server already running")]
-    ServerAlreadyRunning,
+    #[error("Server already running")]
+    ServerAlreadyRunning,
```

5-6: Consider marking the error enum non-exhaustive

Prevents downstream exhaustive matching breakage when adding new variants later.

```diff
 #[derive(Debug, thiserror::Error)]
+#[non_exhaustive]
 pub enum Error {
```

crates/gguf/src/lib.rs (3)
26-45: Support legacy chat template keys as well.

Some GGUFs still use `chat_template` or `llama.chat_template`. Consider accepting these to broaden compatibility.

```diff
-        if key == "tokenizer.chat_template" {
+        if key == "tokenizer.chat_template" || key == "chat_template" || key == "llama.chat_template" {
```
104-156: Make the callback contract explicit.

The callback must fully consume or skip the value; otherwise the reader desynchronizes. Add a brief comment to document this invariant (a toy illustration follows).

```diff
+/// Scans GGUF metadata and invokes `callback` with the reader positioned at the start of the value.
+/// Contract: the callback must fully consume (read) or explicitly skip the value; otherwise parsing will desynchronize.
 fn read_gguf_metadata<F, R>(path: &Path, mut callback: F) -> Result<Option<R>>
```
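To see why the invariant matters, here is a self-contained toy (not the crate's code) with a length-prefixed format; if the callback neither consumes nor skips a value, the next header read lands mid-value and every later entry is misread:

```rust
use std::io::{Cursor, Read, Result};

// Toy format, one entry after another: [1-byte key][1-byte len][len value bytes].
fn read_entries<R: Read>(
    mut r: R,
    mut callback: impl FnMut(u8, &mut R, u8) -> Result<()>,
) -> Result<()> {
    let mut hdr = [0u8; 2];
    while r.read_exact(&mut hdr).is_ok() {
        // If the callback does not consume exactly `len` bytes, this next
        // header read starts in the middle of a value.
        callback(hdr[0], &mut r, hdr[1])?;
    }
    Ok(())
}

fn main() -> Result<()> {
    // Entries: (key=1, "hi") then (key=2, "!").
    let data = Cursor::new(vec![1, 2, b'h', b'i', 2, 1, b'!']);
    read_entries(data, |key, r, len| {
        let mut buf = vec![0u8; len as usize];
        r.read_exact(&mut buf)?; // consume wanted values, drain unwanted ones
        if key == 1 {
            println!("key 1 = {}", String::from_utf8_lossy(&buf));
        }
        Ok(())
    })
}
```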
21-23: No internal `gguf_chat_format` references; handle public-API break

Repo search (`rg -nP '\bgguf_chat_format\s*\(' -S`) found no remaining calls to `gguf_chat_format`. This rename still breaks downstream users; either add a deprecated shim:

```rust
#[deprecated(note = "use `chat_format` instead")]
pub fn gguf_chat_format(&self) -> Result<Option<ChatTemplate>> {
    self.chat_format()
}
```

or bump the crate's major version and update consumers.
plugins/local-llm/src/store.rs (1)
11-12: Unused migration flag?

`ModelSelectionMigrated` isn't referenced in ext.rs; either wire it into migration or remove to avoid dead code.
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (3)
72-75: Remove `as any`; keep ModelSelection strongly typed.

Apply this diff:

```diff
-    const selection: ModelSelection = { type: "Predefined", [model.key]: model.key } as any;
+    const selection = { type: "Predefined", [model.key]: model.key } as Extract<ModelSelection, { type: "Predefined" }>;
```
80-80: Silence unhandled Promises for fire-and-forget calls.

Prefix with `void` to satisfy linters without changing behavior. Apply this diff:

```diff
-    localLlmCommands.restartServer();
+    void localLlmCommands.restartServer();
```

Also applies to: 95-95
151-151: Prefix Promise-returning `open(...)` with `void`.

Prevents unhandled-Promise lint warnings. Apply this diff:

```diff
-    open("https://docs.hyprnote.com/pro/cloud");
+    void open("https://docs.hyprnote.com/pro/cloud");
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (9)
- `Cargo.lock` is excluded by `!**/*.lock`
- `plugins/local-llm/js/bindings.gen.ts` is excluded by `!**/*.gen.ts`
- `plugins/local-llm/permissions/autogenerated/commands/get_current_model_selection.toml` is excluded by `!plugins/**/permissions/**`
- `plugins/local-llm/permissions/autogenerated/commands/list_custom_models.toml` is excluded by `!plugins/**/permissions/**`
- `plugins/local-llm/permissions/autogenerated/commands/set_current_model_selection.toml` is excluded by `!plugins/**/permissions/**`
- `plugins/local-llm/permissions/autogenerated/reference.md` is excluded by `!plugins/**/permissions/**`
- `plugins/local-llm/permissions/default.toml` is excluded by `!plugins/**/permissions/**`
- `plugins/local-llm/permissions/schemas/schema.json` is excluded by `!plugins/**/permissions/**`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (20)
- `admin/server/package.json` (1 hunks)
- `apps/admin/package.json` (1 hunks)
- `apps/desktop/package.json` (5 hunks)
- `apps/desktop/src-tauri/capabilities/default.json` (1 hunks)
- `apps/desktop/src/components/settings/components/ai/llm-local-view.tsx` (3 hunks)
- `apps/pro/package.json` (1 hunks)
- `crates/gguf/src/lib.rs` (2 hunks)
- `crates/llama/src/lib.rs` (1 hunks)
- `packages/obsidian/package.json` (1 hunks)
- `packages/tiptap/package.json` (2 hunks)
- `packages/ui/package.json` (1 hunks)
- `packages/utils/package.json` (2 hunks)
- `plugins/local-llm/Cargo.toml` (2 hunks)
- `plugins/local-llm/build.rs` (1 hunks)
- `plugins/local-llm/src/commands.rs` (2 hunks)
- `plugins/local-llm/src/error.rs` (1 hunks)
- `plugins/local-llm/src/ext.rs` (3 hunks)
- `plugins/local-llm/src/lib.rs` (1 hunks)
- `plugins/local-llm/src/model.rs` (1 hunks)
- `plugins/local-llm/src/store.rs` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit configuration file
**/*.{js,ts,tsx,rs}: 1. Do not add any error handling. Keep the existing one.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- `plugins/local-llm/build.rs`
- `plugins/local-llm/src/commands.rs`
- `plugins/local-llm/src/error.rs`
- `plugins/local-llm/src/lib.rs`
- `plugins/local-llm/src/store.rs`
- `crates/gguf/src/lib.rs`
- `crates/llama/src/lib.rs`
- `plugins/local-llm/src/ext.rs`
- `plugins/local-llm/src/model.rs`
- `apps/desktop/src/components/settings/components/ai/llm-local-view.tsx`
🧬 Code graph analysis (6)
plugins/local-llm/src/commands.rs (1)
- plugins/local-llm/src/ext.rs (6): `list_custom_models` (23-25), `list_custom_models` (264-294), `get_current_model_selection` (28-28), `get_current_model_selection` (297-314), `set_current_model_selection` (29-30), `set_current_model_selection` (317-330)

plugins/local-llm/src/lib.rs (2)
- plugins/local-llm/src/commands.rs (3): `list_custom_models` (120-124), `get_current_model_selection` (128-132), `set_current_model_selection` (136-142)
- plugins/local-llm/src/ext.rs (6): `list_custom_models` (23-25), `list_custom_models` (264-294), `get_current_model_selection` (28-28), `get_current_model_selection` (297-314), `set_current_model_selection` (29-30), `set_current_model_selection` (317-330)

crates/gguf/src/lib.rs (3)
- crates/gguf/src/utils.rs (9): `reader` (13-13), `reader` (15-15), `reader` (18-18), `reader` (20-20), `reader` (66-66), `reader` (68-68), `read_string` (24-33), `skip_value` (35-81), `read_versioned_size` (6-22)
- crates/llama/src/lib.rs (1): `new` (309-366)
- crates/gguf/src/value.rs (1): `try_from` (24-41)

plugins/local-llm/src/ext.rs (2)
- plugins/local-llm/src/commands.rs (5): `list_custom_models` (120-124), `get_current_model` (95-99), `set_current_model` (111-116), `get_current_model_selection` (128-132), `set_current_model_selection` (136-142)
- owhisper/owhisper-config/src/lib.rs (1): `data_dir` (52-54)

plugins/local-llm/src/model.rs (3)
- plugins/local-llm/src/commands.rs (1): `models_dir` (7-9)
- plugins/local-llm/src/ext.rs (2): `models_dir` (12-12), `models_dir` (49-51)
- plugins/local-llm/src/manager.rs (1): `new` (23-35)

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
- apps/desktop/src/components/settings/components/ai/shared.tsx (1): `LLMModel` (46-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-14)
🔇 Additional comments (18)
packages/ui/package.json (1)
48-53: Maintain @tanstack/react-query as a devDependency; no `@tanstack/react-query` imports detected in `packages/ui` source; zod upgrade to ^4.1.5 is safe.

packages/utils/package.json (1)

15-15: AI SDK bumps look fine; double-check import surface stability

Updates to `@ai-sdk/openai-compatible` and `ai` are patch-level; risk is low. Approving as-is.

Also applies to: 25-25, 33-33
apps/pro/package.json (1)
10-17: Confirm cross-package Zod major version mismatch
apps/pro/package.json pins Zod v3.25.76 (line 19), while packages/ui, apps/admin, admin/server, and apps/desktop use Zod v4.1.5; if any Zod schemas/types are shared across packages, verify compatibility or align versions.

packages/tiptap/package.json (1)
22-43: No renamed or removed Tiptap extensions detected
All imports in `packages/tiptap` match the 3.4.1 package names; bump is safe to merge.

apps/admin/package.json (1)

19-45: Broad minor/patch upgrades approved

Upgrades for TanStack, Mantine, ai, better-auth, zod look routine. No script/public API changes here.
Also applies to: 47-49, 53-53, 57-57
plugins/local-llm/Cargo.toml (1)
30-30: Add hypr-gguf dependency — LGTM.

Matches the new `GgufExt` usage across the codebase.

plugins/local-llm/build.rs (1)
14-16: Commands wiring LGTM

New commands are correctly added and match those exposed in lib.rs.
plugins/local-llm/src/lib.rs (1)
56-58: Specta exposure LGTM

New commands are properly exposed in collect_commands.
plugins/local-llm/src/store.rs (1)
6-6: StoreKey addition LGTM

ModelSelection key aligns with new selection flow.
apps/desktop/package.json (1)
38-38: Dependency bumps: sanity-check compatibility

Install workspace dependencies at the monorepo root (`pnpm install` or `yarn install`), then in `apps/desktop` run:

```sh
cd apps/desktop
npm run typecheck
npm run build
```

Confirm all `workspace:^` Tauri plugin versions resolve correctly and remain mutually compatible.

plugins/local-llm/src/commands.rs (4)
1-1: Imports update LGTM

New types are used; no unused imports.

117-124: list_custom_models passthrough LGTM

Command mirrors plugin API and error mapping is consistent with the file.

126-132: get_current_model_selection LGTM

Matches ext API; good.

134-142: set_current_model_selection LGTM

Delegation and error mapping consistent.
plugins/local-llm/src/ext.rs (2)
187-194: Pre-start model existence check LGTM

Good early failure when the selected model file is missing.

263-294: Custom models listing LGTM (macOS-only)

Enumeration and GGUF name extraction look good; returning empty list on non-macOS is acceptable for now.
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (2)
1-1: Imports align with new plugin API and are all used — good change.

Also applies to: 9-9
36-45: React Query wiring looks correct.
Consistent keys and sensible 5s polling for custom models.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/desktop/src/components/toast/model-download.tsx (3)
47-55: Fix: guard against undefined currentLlmModel and align cache keys with inputs.
`enabled` only checks STT; `currentLlmModel.data!` can be undefined at first render. Also, the queryKey should include both models to avoid stale results when selections change.

Apply:

```diff
-  const checkForModelDownload = useQuery({
-    enabled: !!currentSttModel.data,
-    queryKey: ["check-model-downloaded"],
+  const checkForModelDownload = useQuery({
+    enabled: !!currentSttModel.data && !!currentLlmModel.data,
+    queryKey: ["check-model-downloaded", currentSttModel.data, currentLlmModel.data],
     queryFn: async () => {
       const [stt, llm] = await Promise.all([
         localSttCommands.isModelDownloaded(currentSttModel.data!),
         localLlmCommands.isModelDownloaded(currentLlmModel.data!),
       ]);
```
65-72: Fix: ensure model is known before polling download state; key by model.

This can run before `currentSttModel.data` resolves. Also, keying by model prevents cross-model cache bleed.

```diff
-  const sttModelDownloading = useQuery({
-    enabled: !checkForModelDownload.data?.sttModelDownloaded,
-    queryKey: ["stt-model-downloading"],
+  const sttModelDownloading = useQuery({
+    enabled: !!currentSttModel.data && !checkForModelDownload.data?.sttModelDownloaded,
+    queryKey: ["stt-model-downloading", currentSttModel.data],
     queryFn: async () => {
       return localSttCommands.isModelDownloading(currentSttModel.data!);
     },
```
74-81: Fix: mirror gating/keying for LLM download poll.

```diff
-  const llmModelDownloading = useQuery({
-    enabled: !checkForModelDownload.data?.llmModelDownloaded,
-    queryKey: ["llm-model-downloading"],
+  const llmModelDownloading = useQuery({
+    enabled: !!currentLlmModel.data && !checkForModelDownload.data?.llmModelDownloaded,
+    queryKey: ["llm-model-downloading", currentLlmModel.data],
     queryFn: async () => {
       return localLlmCommands.isModelDownloading(currentLlmModel.data!);
     },
```
♻️ Duplicate comments (2)
plugins/local-llm/src/ext.rs (2)
294-309: Fix to struct-like ModelSelection::Predefined { key }

`ModelSelection::Predefined` appears to be struct-like now. The tuple-style construction will break compile/serialization and the stored shape.
Apply this diff:
```diff
-    let current_model = self.get_current_model()?;
-    let selection = crate::ModelSelection::Predefined(current_model);
+    let current_model = self.get_current_model()?;
+    let selection = crate::ModelSelection::Predefined { key: current_model };
```

Optionally verify there are no remaining tuple-style usages:

```bash
#!/bin/bash
# Find tuple-style constructions/matches of Predefined(...)
rg -nP -C2 --type=rs 'ModelSelection::Predefined\s*\('
```
312-324: Align match to struct-like variant for back-compat write-through

Pattern-match the new shape so the legacy Model key is kept in sync when selecting a predefined model.
Apply this diff:
```diff
-    if let crate::ModelSelection::Predefined(supported_model) = &model {
-        let _ = store.set(crate::StoreKey::Model, supported_model.clone());
+    if let crate::ModelSelection::Predefined { key } = &model {
+        let _ = store.set(crate::StoreKey::Model, key.clone());
     }
```
🧹 Nitpick comments (2)
plugins/local-llm/src/ext.rs (1)
263-292: Avoid parsing non-files: use `is_file()` instead of `exists()`

`exists()` allows directories; guard with `is_file()` to prevent handing directories to `GgufExt::model_name()`.

Apply this diff:

```diff
-        if path.exists() {
+        if path.is_file() {
             let name = {
                 use hypr_gguf::GgufExt;
                 path.model_name()
             };
```

apps/desktop/src/components/toast/model-download.tsx (1)

129-136: Avoid false-positive toasts while existence queries are loading.

Gate the effect until both existence queries have settled.

```diff
   useEffect(() => {
     if (!checkForModelDownload.data) {
       return;
     }
+    if (sttModelExists.isLoading || llmModelExists.isLoading) {
+      return;
+    }
+
     if (checkForModelDownload.data?.sttModelDownloaded && checkForModelDownload.data?.llmModelDownloaded) {
       return;
     }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `apps/desktop/src/components/toast/model-download.tsx` (6 hunks)
- `plugins/local-llm/src/ext.rs` (3 hunks)
- `plugins/local-llm/src/store.rs` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit configuration file
**/*.{js,ts,tsx,rs}: 1. Do not add any error handling. Keep the existing one.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- `plugins/local-llm/src/store.rs`
- `apps/desktop/src/components/toast/model-download.tsx`
- `plugins/local-llm/src/ext.rs`
🧬 Code graph analysis (1)
plugins/local-llm/src/ext.rs (1)
plugins/local-llm/src/commands.rs (5)
- `list_custom_models` (120-124), `get_current_model` (95-99), `set_current_model` (111-116), `get_current_model_selection` (128-132), `set_current_model_selection` (136-142)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-14)
🔇 Additional comments (11)
plugins/local-llm/src/store.rs (1)
4-8: Confirm key stability for persisted data

Adding ModelSelection mid-enum should be safe only if the store uses string keys (via strum::Display) rather than ordinal positions. Please confirm tauri_plugin_store2 maps by name so previously persisted keys aren't broken by variant reordering. A small illustration follows.
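A standalone illustration of why name-based keys survive reordering (this is not the plugin's actual enum; strum's `Display` derive serializes the variant name by default):

```rust
use strum::Display; // requires strum with the "derive" feature

#[derive(Display)]
#[allow(dead_code)]
enum StoreKey {
    Model,
    ModelSelection, // inserted mid-enum; the persisted string key is unaffected
    ModelSelectionMigrated,
}

fn main() {
    // Name-based keys are stable under reordering; ordinal keys would shift.
    assert_eq!(StoreKey::Model.to_string(), "Model");
    assert_eq!(StoreKey::ModelSelection.to_string(), "ModelSelection");
}
```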
plugins/local-llm/src/ext.rs (2)
23-25: API contract: macOS-only behavior for list_custom_models

Trait now exposes list_custom_models but the impl returns an empty list on non-macOS. Confirm this OS-specific behavior is intentional and surfaced at the caller/UI level.
Also applies to: 28-30
187-192: Verify models_dir target

Selection-based pathing looks good. Double-check that models_dir() -> .../ttt is the intended directory; otherwise start_server will always hit ModelNotDownloaded.
apps/desktop/src/components/toast/model-download.tsx (8)
4-4: LGTM: connectorCommands import is used and necessary.
27-31: LGTM: provider query is well-keyed and scoped.
37-41: LGTM: hyprcloud-enabled query is correct and used.
42-46: LGTM: custom-llm-enabled query is correct and used.
210-214: LGTM: deps expanded to react to provider/flag changes.
107-127: No drift in provider-source values; optional allowlist refactor
Allowed `llmProviderSource.data` values are confirmed as `"openai"`, `"gemini"`, `"openrouter"`, and `"others"` across the codebase; no new sources detected. To improve readability, optionally replace the OR chain with:

```diff
-    if (
-      hyprcloudEnabled.data || customLlmEnabled.data || llmProviderSource.data === "openai"
-      || llmProviderSource.data === "gemini" || llmProviderSource.data === "openrouter"
-      || llmProviderSource.data === "others"
-    ) {
+    const cloudSources = ["openai", "gemini", "openrouter", "others"];
+    if (hyprcloudEnabled.data || customLlmEnabled.data || cloudSources.includes(llmProviderSource.data as string)) {
       return true;
     }
```
83-85: Verify STT provider identifier

Ensure `localSttCommands.getProvider()` returns the exact string `"Custom"` before short-circuiting in model-download.tsx (lines 83-85, 86-89, 103-104).
32-36: No changes needed for provider-source string checks. Confirmed `getProviderSource()` only returns `"openai"`, `"gemini"`, `"openrouter"`, or `"others"`.
Actionable comments posted: 2
♻️ Duplicate comments (3)
plugins/local-llm/src/model.rs (1)
21-26: Serde tagging fixed (uses adjacently tagged enum).
This resolves the prior tuple-variant tagging issue; TS consumers using `selection.content.*` align with this.
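For reference, adjacent tagging splits the discriminant and payload into `type` and `content` fields, which is exactly the `selection.content.*` shape the UI reads. A minimal sketch (field types simplified to `String`; the real Predefined key is a model enum):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[serde(tag = "type", content = "content")]
enum ModelSelection {
    Predefined { key: String },
    Custom { path: String },
}

fn main() {
    let sel = ModelSelection::Custom { path: "/tmp/m.gguf".into() };
    let json = serde_json::to_string(&sel).unwrap();
    // Prints: {"type":"Custom","content":{"path":"/tmp/m.gguf"}}
    println!("{json}");
}
```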
294-309: Selection migration logic—LGTM.
Backfills ModelSelection from legacy key and persists it; matches the new Predefined { key } shape.apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
274-314: Windows-safe basename for custom model path.
Use a cross-platform split to display the filename.

```diff
-                  <span className="text-xs text-gray-500">{customModel.path.split("/").slice(-1)[0]}</span>
+                  <span className="text-xs text-gray-500">{customModel.path.split(/[/\\]/).slice(-1)[0]}</span>
```
🧹 Nitpick comments (1)
apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (1)
36-40: Query setup looks good; consider reducing disk churn.
Optional: replace fixed 5s polling of custom models with focus-based refetch or a longer staleTime to avoid repeated GGUF header reads.

```diff
   const customModels = useQuery({
     queryKey: ["custom-models"],
     queryFn: () => localLlmCommands.listCustomModels(),
-    refetchInterval: 5000,
+    refetchOnWindowFocus: true,
+    staleTime: 30_000,
   });
```

Also applies to: 41-45
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `plugins/local-llm/js/bindings.gen.ts` is excluded by `!**/*.gen.ts`
📒 Files selected for processing (3)
- `apps/desktop/src/components/settings/components/ai/llm-local-view.tsx` (3 hunks)
- `plugins/local-llm/src/ext.rs` (3 hunks)
- `plugins/local-llm/src/model.rs` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit configuration file
**/*.{js,ts,tsx,rs}: 1. Do not add any error handling. Keep the existing one.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- `plugins/local-llm/src/model.rs`
- `plugins/local-llm/src/ext.rs`
- `apps/desktop/src/components/settings/components/ai/llm-local-view.tsx`
🧬 Code graph analysis (3)
plugins/local-llm/src/model.rs (3)
- plugins/local-llm/src/ext.rs (2): `models_dir` (12-12), `models_dir` (49-51)
- plugins/local-llm/src/commands.rs (1): `models_dir` (7-9)
- plugins/local-llm/src/manager.rs (1): `new` (23-35)

plugins/local-llm/src/ext.rs (1)
- plugins/local-llm/src/commands.rs (5): `list_custom_models` (120-124), `get_current_model` (95-99), `set_current_model` (111-116), `get_current_model_selection` (128-132), `set_current_model_selection` (136-142)

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (2)
- apps/desktop/src/components/settings/components/ai/shared.tsx (1): `LLMModel` (46-53)
- packages/ui/src/lib/utils.ts (1): `cn` (4-6)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (macos, macos-14)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (8)
plugins/local-llm/src/model.rs (2)
15-20: CustomModelInfo payload looks good.
Serializable shape is minimal and sufficient.
28-34: Path resolution logic—LGTM.
Predefined joins models_dir; Custom uses absolute path string.plugins/local-llm/src/ext.rs (4)
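A compact sketch of that rule (types simplified; the predefined `.gguf` filename convention shown here is an assumption):

```rust
use std::path::{Path, PathBuf};

// Simplified stand-ins for the plugin's types.
enum ModelSelection {
    Predefined { key: String },
    Custom { path: String },
}

impl ModelSelection {
    fn file_path(&self, models_dir: &Path) -> PathBuf {
        match self {
            // Predefined models resolve relative to the managed models dir.
            ModelSelection::Predefined { key } => models_dir.join(format!("{key}.gguf")),
            // Custom models carry their own absolute path.
            ModelSelection::Custom { path } => PathBuf::from(path),
        }
    }
}

fn main() {
    let custom = ModelSelection::Custom { path: "/models/my-model.gguf".into() };
    assert_eq!(custom.file_path(Path::new("/data/ttt")), PathBuf::from("/models/my-model.gguf"));

    let predefined = ModelSelection::Predefined { key: "llama-3.2-3b".into() };
    assert!(predefined.file_path(Path::new("/data/ttt")).starts_with("/data/ttt"));
}
```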
23-31: New selection API surface—LGTM.
Trait additions (custom listing + selection getters/setters) are coherent.
187-192: Server start now respects ModelSelection.
Existence check against selection-resolved path is correct.
261-292: Potential panic on macOS data dir.
`dirs::data_dir()` can return `None` in some environments, so the `.unwrap()` would crash `list_custom_models`. Verify this is safe for all target distributions.
311-324: Back-compat write-through—LGTM.
Updating legacy StoreKey::Model when setting a Predefined selection keeps callers of the old API working.

apps/desktop/src/components/settings/components/ai/llm-local-view.tsx (2)
51-60: State sync from ModelSelection—LGTM.
Correctly maps Predefined via content.key and Custom via content.path.
66-69: Predefined selection payload shape—LGTM.
Matches { type: "Predefined", content: { key } }.
```tsx
const handleCustomModelSelection = async (customModel: CustomModelInfo) => {
  setSelectedLLMModel(`custom-${customModel.path}`);

  const selection: ModelSelection = { type: "Custom", content: { path: customModel.path } };
  await localLlmCommands.setCurrentModelSelection(selection);
  queryClient.invalidateQueries({ queryKey: ["current-model-selection"] });

  setCustomLLMEnabledMutation.mutate(false);
  setHyprCloudEnabledMutation.mutate(false);
  setOpenAccordion(null);

  localLlmCommands.restartServer();
};
```
💡 Verification agent
🧩 Analysis chain
Custom selection flow—LGTM; verify command availability.
Ensure `localLlmCommands.restartServer` exists and is exported from the plugin.
🏁 Script executed:
```bash
#!/bin/bash
rg -nP -C2 '\brestartServer\b' -- apps plugins packages || true
```

Length of output: 1798
Implement and export restartServer in localLlmCommands
Ripgrep across apps, plugins, and packages only shows calls to `localLlmCommands.restartServer()`; no definition or export exists. Add a restartServer implementation and export it from the plugin's command interface to prevent runtime errors.
🤖 Prompt for AI Agents
```text
In apps/desktop/src/components/settings/components/ai/llm-local-view.tsx around
lines 78 to 91, the call localLlmCommands.restartServer() has no implementation
or export; add a restartServer method to the localLlmCommands plugin command
surface, implement it to asynchronously restart the local LLM server (e.g.,
await stopServer(); await startServer(); handle and surface errors), ensure it
returns a Promise<void>, add the method to the plugin's TypeScript command
interface/type, and export it from the plugin entry so callers like this
component can import and invoke it without runtime errors.
```
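On the plugin side, the missing surface could be a Tauri command along these lines. This is a hypothetical sketch: the `stop_server`/`start_server` ext methods, the `LocalLlmPluginExt` trait name, and the specta wiring are assumptions modeled on the other commands in this plugin.

```rust
// Hypothetical restart command for plugins/local-llm; names are assumptions.
#[tauri::command]
#[specta::specta]
async fn restart_server<R: tauri::Runtime>(app: tauri::AppHandle<R>) -> Result<(), String> {
    // Assumed ext trait exposing the existing start/stop server operations.
    use crate::LocalLlmPluginExt;

    // Stop-then-start, mirroring the suggestion in the prompt above; errors
    // are mapped to String like the plugin's other command results.
    app.stop_server().await.map_err(|e| e.to_string())?;
    app.start_server().await.map_err(|e| e.to_string())?;
    Ok(())
}
```

Registering it in `collect_commands` and the build.rs command list would then regenerate the `restartServer` TS binding.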
No description provided.