Conversation

Contributor

@devin-ai-integration devin-ai-integration bot commented Feb 10, 2026

feat: simplify connection status check for Ollama and LM Studio

Summary

Re-implements the feature from #2968 (which was reverted) with a simpler approach. Shows connection status for local LLM providers (Ollama, LM Studio) in AI settings.

Key simplifications vs #2968:

  • 1 new file instead of 2 — merged connection check + hook into a single use-local-provider-status.ts (67 lines vs original 122)
  • No Effect.js for connection check — uses plain fetch + AbortController for 2s timeout
  • No LMStudio SDK for connection check — pings the OpenAI-compatible /v1/models endpoint (works for both providers)
  • No changes to useConfiguredMapping — status logic stays in the rendering layer, avoiding the complex eligibility bypass logic from the original
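
A minimal sketch of the connection check described above: a plain fetch with an AbortController-enforced 2s timeout against the OpenAI-compatible `/models` endpoint. The function name `checkLocalProvider` and the use of the global `fetch` (the PR goes through Tauri's HTTP client) are illustrative assumptions, not code from this PR:

```typescript
// Hedged sketch of the simplified connection check. Any HTTP response
// (even an error status check via res.ok) proves the server is up;
// a refused connection, network error, or timeout means "not running".
async function checkLocalProvider(
  baseUrl: string,
  timeoutMs = 2000,
): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Both providers expose an OpenAI-compatible models list, e.g.
    // http://127.0.0.1:11434/v1/models (Ollama) and
    // http://127.0.0.1:1234/v1/models (LM Studio).
    const res = await fetch(`${baseUrl}/models`, { signal: controller.signal });
    return res.ok;
  } catch {
    // Connection refused, network error, or abort on timeout → not running.
    return false;
  } finally {
    clearTimeout(timer);
  }
}
```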

Features (same as original):

  • Status badge on provider cards (Connected / Not Running / spinner)
  • "Connect" button when provider is not running (triggers manual recheck)
  • Green dot indicator in provider dropdown when connected
  • Local providers disabled in dropdown when not connected
  • Download and model library links for Ollama and LM Studio

Review & Testing Checklist for Human

  • Verify /v1/models endpoint works for both providers: The original used Ollama's /api/tags and LMStudio's WebSocket SDK. This simplified version pings ${baseUrl}/models for both. Confirm Ollama responds correctly on http://127.0.0.1:11434/v1/models and LM Studio on http://127.0.0.1:1234/v1/models
  • Test with Ollama running/stopped: Start Ollama, open AI settings → verify "Connected" badge. Stop Ollama → verify "Not Running" appears (up to 15s)
  • Test with LM Studio running/stopped: Same as above
  • Test "Connect" button: When showing "Not Running", click Connect and verify it triggers a status recheck
  • Verify provider dropdown behavior: Local providers should be disabled when not running, show green dot when connected, and other providers should be unaffected
  • Check accordion trigger layout: The status badge and Connect button shouldn't break the layout of non-local provider cards

Notes

  • This was not tested in a running Tauri desktop environment — human testing with actual Ollama/LM Studio is essential
  • The useLocalProviderStatus hook is called for every provider via NonHyprProviderCard, but the enabled: isLocal flag prevents actual network requests for non-local providers
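
The gating described in the note above can be sketched as the options the hook presumably hands to React Query. Everything here beyond `enabled: isLocal` is an assumption for illustration — the query key, the injected check function, and the 15s refetch interval inferred from the checklist's "up to 15s":

```typescript
type CheckFn = (baseUrl: string) => Promise<boolean>;

// Hedged sketch: React Query never invokes queryFn while `enabled` is
// false, so the hook can safely run for every provider card while only
// local providers actually hit the network.
function localProviderQueryOptions(
  providerId: string,
  baseUrl: string,
  isLocal: boolean,
  check: CheckFn,
) {
  return {
    queryKey: ["local-provider-status", providerId] as const,
    queryFn: () => check(baseUrl),
    enabled: isLocal, // no request for non-local providers
    refetchInterval: 15_000, // assumed from the checklist's "up to 15s"
  };
}
```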

Link to Devin run: https://app.devin.ai/sessions/853696731f4448a4a1633f12ddd7f8b1
Requested by: @ComputelessComputer




netlify bot commented Feb 10, 2026

Deploy Preview for hyprnote canceled.

Latest commit: 589f3b1
Latest deploy log: https://app.netlify.com/projects/hyprnote/deploys/698bfffcc6082700080019ab


netlify bot commented Feb 10, 2026

Deploy Preview for hyprnote-storybook canceled.

Latest commit: 589f3b1
Latest deploy log: https://app.netlify.com/projects/hyprnote-storybook/deploys/698bfffc83bed9000807a4a1

Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR that start with 'DevinAI' or '@devin'.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

Contributor Author

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View 5 additional findings in Devin Review.


devin-ai-integration bot and others added 2 commits February 11, 2026 12:47
Co-Authored-By: john@hyprnote.com <john@hyprnote.com>
Extract the standalone "See our setup guide for detailed instructions."
text from provider descriptions and add a compact "Setup guide" link
next to the existing "Available models" link in the UI. This makes the
provider descriptions shorter and moves setup links into a consistent
place beside model links so users can discover setup instructions
without cluttering the description.

Changes:
- Removed inline "See our setup guide..." sentences from LM Studio and Ollama provider descriptions.
- Added an optional "setup" link field to provider link configs and populated it for LM Studio and Ollama.
- Updated shared UI to render the "Setup guide" link (with separator) adjacent to the "Available models" link, wrapped together for layout consistency.
@ComputelessComputer ComputelessComputer force-pushed the devin/1770686281-simplify-local-provider-status branch 2 times, most recently from 0c3f3a2 to 18b1f1e Compare February 11, 2026 04:01
ComputelessComputer and others added 3 commits February 11, 2026 13:02
…der-status.ts

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
The Local setup guide link and its HelpCircle icon were removed from the
Configure Providers header in the LLM settings component. This
simplifies the header UI by eliminating the external documentation link
that appeared beside the title, leaving only the concise section title.

- Deleted imports for HelpCircle and the surrounding anchor element linking to the local LLM setup guide.
- Replaced the header block with a single h3 element to keep layout consistent with the rest of the section.
Adjust UI spacing and simplify JSX fragments to make gaps consistent
between the download, models, and guide links. Change the models link
container gap from 2 to 4 and remove an unnecessary fragment wrapper
around the setup link.

Also reformat the local provider status assignment for consistent
indentation and readability: collapse the multiline ternary into a more
compact form without changing logic.
Contributor Author

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 new potential issue.

View 8 additional findings in Devin Review.


Comment on lines +59 to +63
const status: LocalProviderStatus = query.isLoading
  ? "checking"
  : query.data
    ? "connected"
    : "disconnected";
Contributor Author


🟡 query.isLoading is false on refetch, so "Connect" button never shows loading feedback

When the user clicks the "Connect" button to recheck a local provider's status, there is no visual feedback because the "checking" state is derived from query.isLoading, which is only true on the initial fetch when no cached data exists.

Root Cause

In React Query, isLoading is true only when isPending && isFetching — i.e., there's no cached data and a fetch is in progress. After the first connection check fails (query.data = false), subsequent refetches (triggered by the "Connect" button via query.refetch()) set isFetching = true but keep isLoading = false because cached data (false) already exists.

The status derivation at use-local-provider-status.ts:59-63:

const status: LocalProviderStatus = query.isLoading
  ? "checking"
  : query.data
    ? "connected"
    : "disconnected";

After the first failed check, clicking "Connect" triggers refetchStatus(), which calls query.refetch(). During this refetch:

  • query.isLoading is false (cached data exists)
  • query.data is false (cached failed result)
  • So status stays "disconnected" throughout the refetch

This defeats the intended behavior at apps/desktop/src/components/settings/ai/shared/index.tsx:180-181 where disabled={localStatus === "checking"} was meant to disable the button during the recheck, and at line 171 where StatusBadge was meant to show a spinner. Neither fires because status never becomes "checking" on refetch.

Impact: The "Connect" button remains clickable with no loading indicator while the recheck is in progress, giving the user no feedback that anything is happening. Users may click it repeatedly.

Suggested change

- const status: LocalProviderStatus = query.isLoading
-   ? "checking"
-   : query.data
-     ? "connected"
-     : "disconnected";
+ const status: LocalProviderStatus = query.isLoading || (query.isFetching && !query.data)
+   ? "checking"
+   : query.data
+     ? "connected"
+     : "disconnected";
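
The fix can be isolated as a pure derivation over React Query's flags, which makes the refetch case easy to unit-test. The standalone `deriveStatus` helper and the minimal flag shape below are illustrative, not code from the PR:

```typescript
type LocalProviderStatus = "checking" | "connected" | "disconnected";

// Minimal stand-in for the React Query flags the hook reads.
interface QuerySnapshot {
  isLoading: boolean; // true only on the initial fetch, with no cached data
  isFetching: boolean; // true for any in-flight fetch, including refetches
  data?: boolean; // cached result of the last connection check
}

// Treat a refetch over a previously failed check as "checking", so the
// Connect button disables and the badge spins instead of staying
// "disconnected" for the whole recheck.
function deriveStatus(q: QuerySnapshot): LocalProviderStatus {
  if (q.isLoading || (q.isFetching && !q.data)) return "checking";
  return q.data ? "connected" : "disconnected";
}
```

Note that a refetch over a cached `true` (a background recheck of an already-connected provider) intentionally stays "connected" rather than flickering to a spinner.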


Comment on lines +19 to +20
const origin = new URL(baseUrl).origin;
const res = await tauriFetch(`${baseUrl}/models`, {
Contributor


Origin header is constructed from the target server's URL, producing a misleading origin

The code constructs an origin from baseUrl, then sends it to an endpoint derived from the same baseUrl. This creates a mismatch:

  • Origin: http://127.0.0.1:11434 (extracted from baseUrl)
  • Request URL: http://127.0.0.1:11434/v1/models

The Origin header should match the origin of the page making the request, not the target server. In Tauri apps, this should typically be the app's origin (e.g., tauri://localhost).

// Remove this line - Tauri fetch handles Origin automatically
// headers: { Origin: origin },

// Or use a fixed Tauri origin if CORS checking is needed:
const res = await tauriFetch(`${baseUrl}/models`, {
  signal: controller.signal,
  // Origin header typically not needed in Tauri context
});

This could cause CORS preflight failures or unexpected behavior depending on how LM Studio/Ollama validate the Origin header.

Spotted by Graphite Agent



