
[Bug]: Category delegation fails to resolve Ollama models despite correct configuration #1508

@rooftop-Owl


Bug Description

Category-based delegation using delegate_task(category='quick') fails with "Model not configured for category" error, even though the category is correctly configured with an Ollama model (ollama/ministral-3:14b-32k-agent).

Direct agent routing using delegate_task(subagent_type='explore') works perfectly with the same Ollama model — only category-based routing fails.


Steps to Reproduce

  1. Configure Ollama provider in OpenCode (~/.config/opencode/opencode.json)
  2. Set category model in oh-my-opencode config:
    "categories": {
      "quick": {
        "agents": ["explore", "librarian"],
        "default_model": "ollama/ministral-3:14b-32k-agent"
      }
    }
  3. Populate provider cache with Ollama models (required because OpenCode SDK doesn't auto-discover local models)
  4. Attempt category delegation:
    delegate_task(category='quick', load_skills=[], run_in_background=true, prompt='List files')
  5. Observe error: "Model not configured for category"
  6. Compare with direct agent routing (works):
    delegate_task(subagent_type='explore', load_skills=[], run_in_background=true, prompt='List files')

Expected Behavior

Category delegation should resolve ollama/ministral-3:14b-32k-agent from categories.quick.default_model and delegate to the explore agent using Ollama, the same way direct agent routing does.


Actual Behavior

Error returned:

Model not configured for category "quick".

Configure in one of:
1. OpenCode: Set "model" in opencode.json
2. Oh-My-OpenCode: Set category model in oh-my-opencode.json
3. Provider: Connect a provider with available models

Current category: quick
Available categories: visual-engineering, ultrabrain, deep, artistry, quick, ...

Note: "quick" IS listed in available categories, confirming config was loaded. The failure is in model resolution, not category discovery.


Root Cause

fetchAvailableModels() in src/shared/model-availability.ts line 196 assumes modelIds is string[], but manually-populated Ollama caches use object[] format with metadata ({id, provider, context, output}). This causes:

// Line 196 — interpolating an object entry into the template literal
// stringifies it via Object.prototype.toString
modelSet.add(`${providerId}/${modelId}`)
// Actual result: "ollama/[object Object]"

The availableModels Set then contains invalid entries → resolveModelPipeline() can't find the model → error at line 869.

Why direct routing works: delegate_task(subagent_type='explore') bypasses fetchAvailableModels() entirely, going straight to agent config.
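The coercion can be reproduced in isolation. A minimal sketch — the `CachedModel` shape is assumed from the provider cache file shown below, not taken from the oh-my-opencode source:

```typescript
// Hypothetical reproduction of the coercion bug (types and values assumed
// from the issue; not the actual fetchAvailableModels() implementation).
type CachedModel =
  | string
  | { id: string; provider: string; context?: number; output?: number };

const cachedModels: CachedModel[] = [
  { id: "ministral-3:14b-32k-agent", provider: "ollama", context: 32768, output: 8192 },
];

const modelSet = new Set<string>();
for (const modelId of cachedModels) {
  // Template-literal interpolation stringifies the object entry.
  modelSet.add(`ollama/${modelId}`);
}

console.log([...modelSet]);                                    // [ "ollama/[object Object]" ]
console.log(modelSet.has("ollama/ministral-3:14b-32k-agent")); // false
```

With the Set poisoned like this, any exact-match lookup for `ollama/ministral-3:14b-32k-agent` fails, which matches the observed "Model not configured" error.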


Doctor Output

Manual environment info:
- oh-my-opencode: 3.2.3
- @opencode-ai/plugin: 1.1.51
- OpenCode SDK: 1.1.19
- Ollama: running on http://localhost:11434
- Models: ministral-3:14b-32k-agent, qwen3-coder:32k-agent, lfm2.5-thinking:agent
- Provider cache: manually populated (6 Ollama models)

Error Logs

Error originates at src/tools/delegate-task/executor.ts line 869:

if (!categoryModel && !actualModel) {
  return {
    error: `Model not configured for category "${args.category}". ...`
  }
}

Both categoryModel and actualModel are undefined after resolveModelPipeline() runs at line 810.


Configuration

{
  "categories": {
    "quick": {
      "agents": ["explore", "librarian"],
      "default_model": "ollama/ministral-3:14b-32k-agent",
      "description": "Fast trivial tasks - local Ollama (free)"
    }
  },
  "agents": {
    "explore": {
      "model": "ollama/ministral-3:14b-32k-agent",
      "temperature": 0.2,
      "stream": false,
      "description": "Codebase exploration - Ollama local model"
    }
  }
}

Provider cache (~/.cache/oh-my-opencode/provider-models.json):

{
  "models": {
    "ollama": [
      { "id": "ministral-3:14b-32k-agent", "provider": "ollama", "context": 32768, "output": 8192 }
    ]
  },
  "connected": ["anthropic", "google", "ollama", "openai", "opencode"]
}

Fix Submitted

PR #1509: Handles both string[] and object[] cache formats in fetchAvailableModels().

  • Backward compatible with existing string[] format
  • Extracts .id from object entries when present
  • Skips invalid entries gracefully
  • 4 new test cases, 48 total passing
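A minimal sketch of the dual-format handling the PR describes — the `normalizeModelId` helper name and signature are assumptions for illustration, not necessarily what PR #1509 uses:

```typescript
// Sketch of handling both string[] and object[] cache formats
// (helper name assumed; not the actual PR #1509 code).
type CacheEntry =
  | string
  | { id?: unknown; provider?: string; context?: number; output?: number };

function normalizeModelId(entry: CacheEntry): string | undefined {
  if (typeof entry === "string") return entry;                // legacy string[] format
  if (entry && typeof entry.id === "string") return entry.id; // object[] format with metadata
  return undefined;                                           // skip invalid entries
}

const entries: CacheEntry[] = [
  "qwen3-coder:32k-agent",                                 // old format
  { id: "ministral-3:14b-32k-agent", provider: "ollama" }, // new format
  { provider: "ollama" },                                  // invalid: no id
];

const modelSet = new Set<string>();
for (const entry of entries) {
  const id = normalizeModelId(entry);
  if (id !== undefined) modelSet.add(`ollama/${id}`);
}

console.log([...modelSet]);
// [ "ollama/qwen3-coder:32k-agent", "ollama/ministral-3:14b-32k-agent" ]
```

Branching on `typeof` keeps the legacy path untouched while the invalid entry is silently skipped rather than polluting the Set.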

Additional Context — Upstream Root Cause

Manual cache population is required because OpenCode SDK's client.model.list() does not return local Ollama models. This is a known issue at the OpenCode level:

Active Upstream Development

OpenCode is actively building auto-discovery on the models-endpoint feature branch. That branch has not yet been promoted to dev, so PR #1509 serves as a bridge fix until OpenCode ships auto-discovery.

Operating System

Linux

OpenCode Version

1.1.51
