feat: simplify connection status check for Ollama and LM Studio #3804
Conversation
Resolved review comment (outdated): apps/desktop/src/components/settings/ai/shared/use-local-provider-status.ts
Co-Authored-By: john@hyprnote.com <john@hyprnote.com>
Extract the standalone "See our setup guide for detailed instructions." text from provider descriptions and add a compact "Setup guide" link next to the existing "Available models" link in the UI. This makes the provider descriptions shorter and moves setup links into a consistent place beside model links, so users can discover setup instructions without cluttering the description.

Changes:
- Removed inline "See our setup guide..." sentences from the LM Studio and Ollama provider descriptions.
- Added an optional "setup" link field to provider link configs and populated it for LM Studio and Ollama.
- Updated the shared UI to render the "Setup guide" link (with separator) adjacent to the "Available models" link, wrapped together for layout consistency.
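The optional link field described above might look roughly like this in TypeScript. The type and field names here are assumptions for illustration, not Hyprnote's actual config types, and the URLs are placeholders:

```typescript
// Hypothetical provider link config; names are illustrative, not Hyprnote's real types.
interface ProviderLinks {
  models?: string; // existing "Available models" link
  setup?: string;  // new optional "Setup guide" link
}

// Example config for a local provider (placeholder URLs).
const ollamaLinks: ProviderLinks = {
  models: "https://example.com/models",
  setup: "https://example.com/setup-guide",
};

// The UI renders the "Setup guide" link only when the field is present.
function visibleLinkLabels(links: ProviderLinks): string[] {
  const labels: string[] = [];
  if (links.models) labels.push("Available models");
  if (links.setup) labels.push("Setup guide");
  return labels;
}
```

Making `setup` optional means providers without a guide simply render no extra link, with no per-provider conditionals in the shared UI.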
Force-pushed from 0c3f3a2 to 18b1f1e (compare)
…der-status.ts

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
The Local setup guide link and its HelpCircle icon were removed from the Configure Providers header in the LLM settings component. This simplifies the header UI by eliminating the external documentation link that appeared beside the title, leaving only the concise section title.

- Deleted imports for HelpCircle and the surrounding anchor element linking to the local LLM setup guide.
- Replaced the header block with a single h3 element to keep the layout consistent with the rest of the section.
Adjust UI spacing and simplify JSX fragments to make gaps consistent between the download, models, and guide links. Change the models link container gap from 2 to 4 and remove an unnecessary fragment wrapper around the setup link. Also reformat the local provider status assignment for consistent indentation and readability: collapse the multiline ternary into a more compact form without changing logic.
    const status: LocalProviderStatus = query.isLoading
      ? "checking"
      : query.data
        ? "connected"
        : "disconnected";
🟡 query.isLoading is false on refetch, so "Connect" button never shows loading feedback
When the user clicks the "Connect" button to recheck a local provider's status, there is no visual feedback because the "checking" state is derived from query.isLoading, which is only true on the initial fetch when no cached data exists.
Root Cause
In React Query, isLoading is true only when isPending && isFetching — i.e., there's no cached data and a fetch is in progress. After the first connection check fails (query.data = false), subsequent refetches (triggered by the "Connect" button via query.refetch()) set isFetching = true but keep isLoading = false because cached data (false) already exists.
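A minimal model of that flag relationship, purely to illustrate the React Query semantics described above (this is not the library's code):

```typescript
// isLoading is true only while there is no cached data AND a fetch is in flight.
// hasData stands in for "the query cache holds a result" (even a cached `false`).
function modelIsLoading(hasData: boolean, isFetching: boolean): boolean {
  return !hasData && isFetching;
}

// Initial check: nothing cached yet, fetch running  -> true ("checking" shows).
// Refetch after a failed check: `false` is cached   -> stays false, no feedback.
```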
The status derivation at use-local-provider-status.ts:59-63:
const status: LocalProviderStatus = query.isLoading
  ? "checking"
  : query.data
    ? "connected"
    : "disconnected";

After the first failed check, clicking "Connect" triggers refetchStatus() → query.refetch(). During this refetch:

- query.isLoading is false (cached data exists)
- query.data is false (cached failed result)
- So status stays "disconnected" throughout the refetch
This defeats the intended behavior at apps/desktop/src/components/settings/ai/shared/index.tsx:180-181 where disabled={localStatus === "checking"} was meant to disable the button during the recheck, and at line 171 where StatusBadge was meant to show a spinner. Neither fires because status never becomes "checking" on refetch.
Impact: The "Connect" button remains clickable with no loading indicator while the recheck is in progress, giving the user no feedback that anything is happening. Users may click it repeatedly.
Suggested change:

    - const status: LocalProviderStatus = query.isLoading
    + const status: LocalProviderStatus = query.isLoading || (query.isFetching && !query.data)
        ? "checking"
        : query.data
          ? "connected"
          : "disconnected";
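Restated as a pure function so the refetch case can be checked in isolation. This is a sketch of the suggested derivation, where query.data is the boolean reachability result:

```typescript
type LocalProviderStatus = "checking" | "connected" | "disconnected";

// Also treat an in-flight refetch over falsy cached data as "checking",
// so the button disables and the spinner shows during rechecks.
function deriveStatus(q: { isLoading: boolean; isFetching: boolean; data?: boolean }): LocalProviderStatus {
  return q.isLoading || (q.isFetching && !q.data)
    ? "checking"
    : q.data
      ? "connected"
      : "disconnected";
}
```

Note that a refetch over a cached `true` still reports "connected" rather than flickering back to "checking", which is usually the desired behavior for a background recheck.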
    const origin = new URL(baseUrl).origin;
    const res = await tauriFetch(`${baseUrl}/models`, {
Origin header construction creates invalid URL
The code constructs an origin from baseUrl, then sends it to an endpoint derived from the same baseUrl. This creates a mismatch:

- Origin: http://127.0.0.1:11434 (extracted from baseUrl)
- Request URL: http://127.0.0.1:11434/v1/models

The Origin header should match the origin of the page making the request, not the target server. In Tauri apps, this should typically be the app's origin (e.g., tauri://localhost).
    // Remove this line - Tauri fetch handles Origin automatically
    // headers: { Origin: origin },

    // Or use a fixed Tauri origin if CORS checking is needed:
    const res = await tauriFetch(`${baseUrl}/models`, {
      signal: controller.signal,
      // Origin header typically not needed in Tauri context
    });

This could cause CORS preflight failures or unexpected behavior depending on how LM Studio/Ollama validate the Origin header.
Spotted by Graphite Agent
Summary
Re-implements the feature from #2968 (which was reverted) with a simpler approach. Shows connection status for local LLM providers (Ollama, LM Studio) in AI settings.
Key simplifications vs #2968:

- Single hook, use-local-provider-status.ts (67 lines vs original 122)
- fetch + AbortController for 2s timeout
- One /v1/models endpoint (works for both providers)
- No useConfiguredMapping — status logic stays in the rendering layer, avoiding the complex eligibility bypass logic from the original

Features (same as original):
Review & Testing Checklist for Human
- Verify the /v1/models endpoint works for both providers: the original used Ollama's /api/tags and LM Studio's WebSocket SDK. This simplified version pings ${baseUrl}/models for both. Confirm Ollama responds correctly on http://127.0.0.1:11434/v1/models and LM Studio on http://127.0.0.1:1234/v1/models.

Notes
- The useLocalProviderStatus hook is called for every provider via NonHyprProviderCard, but the enabled: isLocal flag prevents actual network requests for non-local providers

Link to Devin run: https://app.devin.ai/sessions/853696731f4448a4a1633f12ddd7f8b1
Requested by: @ComputelessComputer