Local Ollama models not included in model listing, breaking plugin model discovery #12243

@rooftop-Owl

Description


When Ollama is registered as a provider in opencode.json, the SDK's model listing does not include Ollama models. The provider appears in the "connected" list, but the models object remains empty for Ollama.

This causes plugins (e.g., oh-my-opencode) that rely on model discovery to fail when routing tasks to local Ollama models.

What works:

  • Ollama provider is recognized (appears in connected array)
  • Direct model usage works when explicitly specified by a plugin
  • Ollama's own API responds correctly (GET /v1/models returns all models)

What doesn't work:

  • Model listing never includes Ollama models
  • Plugin caches (provider-models.json) stay empty for Ollama
  • Category-based delegation in plugins can't discover Ollama models

Plugins

oh-my-opencode 3.2.3


OpenCode version

1.1.51


Steps to reproduce

  1. Install and start Ollama:

    ollama serve
    ollama pull ministral-3:14b-32k-agent
  2. Configure Ollama provider in ~/.config/opencode/opencode.json:

    {
      "provider": {
        "ollama": {
          "name": "Ollama",
          "npm": "@ai-sdk/openai-compatible",
          "options": { "baseURL": "http://localhost:11434/v1" },
          "models": {
            "ministral-3:14b-32k-agent": { "name": "ministral-3:14b-32k-agent" }
          }
        }
      }
    }
  3. Start OpenCode and check provider cache:

    cat ~/.cache/oh-my-opencode/provider-models.json
  4. Observe: "models": {}; the Ollama models are missing despite the provider being connected.

  5. Verify Ollama API works directly:

    curl http://localhost:11434/v1/models
    # Returns: { "data": [{ "id": "ministral-3:14b-32k-agent", ... }] }

Screenshot and/or share link

Cache file showing the issue:

{
  "models": {},
  "connected": ["openai", "google", "anthropic", "opencode", "ollama"],
  "updatedAt": "2026-02-05T05:24:52.824Z"
}

Note: ollama is in connected but models is empty.


Operating System

Linux


Terminal

SSH


Additional Notes

Current workaround: manually query ollama list and inject the models into the provider cache. This is fragile; the entries get overwritten on every provider refresh.
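
For reference, a minimal sketch of that injection (TypeScript, Node 18+ for the global fetch). The cache path matches the snippet above, but the ollama/<id> key format is an assumption and may not match what oh-my-opencode actually expects:

    import { readFile, writeFile } from "node:fs/promises";
    import { homedir } from "node:os";
    import { join } from "node:path";

    const cachePath = join(homedir(), ".cache/oh-my-opencode/provider-models.json");

    // Ask Ollama's OpenAI-compatible endpoint which models are installed.
    const res = await fetch("http://localhost:11434/v1/models");
    const { data } = (await res.json()) as { data: { id: string }[] };

    // Inject them into the plugin cache. Fragile by design: the next
    // provider refresh rewrites this file and the entries disappear.
    const cache = JSON.parse(await readFile(cachePath, "utf8"));
    for (const m of data) {
      cache.models[`ollama/${m.id}`] = { name: m.id }; // key format is a guess
    }
    cache.updatedAt = new Date().toISOString();
    await writeFile(cachePath, JSON.stringify(cache, null, 2));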

Suggested approach: when a provider uses npm: "@ai-sdk/openai-compatible" and has a baseURL, query GET {baseURL}/models to discover the available models. Ollama, LM Studio, and other local servers all support this OpenAI-compatible endpoint.
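
A minimal sketch of what that discovery step could look like, in TypeScript. discoverModels is a hypothetical helper, not an existing OpenCode function; the only thing it assumes is the OpenAI-compatible response shape shown in step 5 above:

    // Hypothetical helper: list models from any OpenAI-compatible server
    // and return them in the shape the opencode.json "models" map uses.
    async function discoverModels(
      baseURL: string,
    ): Promise<Record<string, { name: string }>> {
      const res = await fetch(`${baseURL}/models`);
      if (!res.ok) throw new Error(`GET ${baseURL}/models failed: ${res.status}`);
      const body = (await res.json()) as { data: { id: string }[] };
      // e.g. { "ministral-3:14b-32k-agent": { name: "ministral-3:14b-32k-agent" } }
      return Object.fromEntries(body.data.map((m) => [m.id, { name: m.id }]));
    }

Merging this result with any models declared explicitly in opencode.json would let plugins discover local models without manual cache surgery.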

Impact: Any plugin relying on model discovery cannot use local models without manual workarounds. This affects the growing ecosystem of users running local models for cost savings and privacy.


Related Issues & Active Development

Awareness note: we've identified significant related work and are cross-referencing it here for visibility.

Canonical Feature Request

Active Development (models-endpoint feature branch)

Related PRs (direct to dev)

Related Issues (same root cause)

Downstream Impact (oh-my-opencode plugin)
