Bug Description
Category-based delegation using delegate_task(category='quick') fails with "Model not configured for category" error, even though the category is correctly configured with an Ollama model (ollama/ministral-3:14b-32k-agent).
Direct agent routing using delegate_task(subagent_type='explore') works perfectly with the same Ollama model — only category-based routing fails.
Steps to Reproduce
- Configure Ollama provider in OpenCode (~/.config/opencode/opencode.json)
- Set category model in oh-my-opencode config:
  "categories": { "quick": { "agents": ["explore", "librarian"], "default_model": "ollama/ministral-3:14b-32k-agent" } }
- Populate provider cache with Ollama models (required because OpenCode SDK doesn't auto-discover local models; see the sketch after this list)
- Attempt category delegation:
  delegate_task(category='quick', load_skills=[], run_in_background=true, prompt='List files')
- Observe error: "Model not configured for category"
- Compare with direct agent routing (works):
  delegate_task(subagent_type='explore', load_skills=[], run_in_background=true, prompt='List files')
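The cache-population step has no official tooling today; the snippet below is a minimal sketch of how it was done for this report. It is illustrative only, and the path and JSON shape are the same ones shown under Configuration further down.

// Hypothetical one-off script (not part of oh-my-opencode) to write the provider cache by hand.
import { mkdirSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const cacheDir = join(homedir(), ".cache", "oh-my-opencode");
mkdirSync(cacheDir, { recursive: true });

const cache = {
  models: {
    ollama: [
      { id: "ministral-3:14b-32k-agent", provider: "ollama", context: 32768, output: 8192 },
    ],
  },
  connected: ["anthropic", "google", "ollama", "openai", "opencode"],
};

writeFileSync(join(cacheDir, "provider-models.json"), JSON.stringify(cache, null, 2) + "\n");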
Expected Behavior
Category delegation should resolve ollama/ministral-3:14b-32k-agent from categories.quick.default_model and delegate to the explore agent using Ollama, the same way direct agent routing does.
Actual Behavior
Error returned:
Model not configured for category "quick".
Configure in one of:
1. OpenCode: Set "model" in opencode.json
2. Oh-My-OpenCode: Set category model in oh-my-opencode.json
3. Provider: Connect a provider with available models
Current category: quick
Available categories: visual-engineering, ultrabrain, deep, artistry, quick, ...
Note: "quick" IS listed in available categories, confirming config was loaded. The failure is in model resolution, not category discovery.
Root Cause
fetchAvailableModels() in src/shared/model-availability.ts line 196 assumes modelIds is string[], but manually-populated Ollama caches use object[] format with metadata ({id, provider, context, output}). This causes:
// Line 196 — string concatenation with object produces garbage
modelSet.add(`${providerId}/${modelId}`)
// Actual result: "ollama/[object Object]"

The availableModels Set then contains invalid entries → resolveModelPipeline() can't find the model → error at line 869.
Why direct routing works: delegate_task(subagent_type='explore') bypasses fetchAvailableModels() entirely, going straight to agent config.
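A standalone sketch of the mismatch, assuming the cache entry shape shown under Configuration below; variable names are illustrative, and only the template-literal concatenation mirrors line 196.

// Illustrative reproduction of the bug, not the actual fetchAvailableModels() source.
type CacheEntry = string | { id: string; provider: string; context?: number; output?: number };

// Manually-populated Ollama caches store object entries with metadata:
const cached: Record<string, CacheEntry[]> = {
  ollama: [{ id: "ministral-3:14b-32k-agent", provider: "ollama", context: 32768, output: 8192 }],
};

const modelSet = new Set<string>();
for (const [providerId, modelIds] of Object.entries(cached)) {
  for (const modelId of modelIds) {
    // Template-literal interpolation stringifies the object entry:
    modelSet.add(`${providerId}/${modelId}`); // adds "ollama/[object Object]"
  }
}

console.log(modelSet.has("ollama/ministral-3:14b-32k-agent")); // false, so resolution fails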
Doctor Output
Manual environment info:
- oh-my-opencode: 3.2.3
- @opencode-ai/plugin: 1.1.51
- OpenCode SDK: 1.1.19
- Ollama: running on http://localhost:11434
- Models: ministral-3:14b-32k-agent, qwen3-coder:32k-agent, lfm2.5-thinking:agent
- Provider cache: manually populated (6 Ollama models)
Error Logs
Error originates at src/tools/delegate-task/executor.ts line 869:
if (!categoryModel && !actualModel) {
return {
error: `Model not configured for category "${args.category}". ...`
}
}

Both categoryModel and actualModel are undefined after resolveModelPipeline() runs at line 810.
Configuration
{
"categories": {
"quick": {
"agents": ["explore", "librarian"],
"default_model": "ollama/ministral-3:14b-32k-agent",
"description": "Fast trivial tasks - local Ollama (free)"
}
},
"agents": {
"explore": {
"model": "ollama/ministral-3:14b-32k-agent",
"temperature": 0.2,
"stream": false,
"description": "Codebase exploration - Ollama local model"
}
}
}

Provider cache (~/.cache/oh-my-opencode/provider-models.json):
{
"models": {
"ollama": [
{ "id": "ministral-3:14b-32k-agent", "provider": "ollama", "context": 32768, "output": 8192 }
]
},
"connected": ["anthropic", "google", "ollama", "openai", "opencode"]
}

Fix Submitted
PR #1509: Handles both string[] and object[] cache formats in fetchAvailableModels().
- Backward compatible with existing string[] format
- Extracts .id from object entries when present
- Skips invalid entries gracefully
- 4 new test cases, 48 total passing
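A minimal sketch of the normalization approach (names here are illustrative; PR #1509 is the authoritative change in src/shared/model-availability.ts):

// Sketch only: mirrors the behavior described above, not the exact source.
type CacheEntry = string | { id?: string; provider?: string; context?: number; output?: number };

function toModelId(entry: CacheEntry): string | undefined {
  if (typeof entry === "string") return entry;                // legacy string[] format
  if (entry && typeof entry.id === "string") return entry.id; // object[] format with metadata
  return undefined;                                           // invalid entry, skipped below
}

function collectAvailableModels(cached: Record<string, CacheEntry[]>): Set<string> {
  const modelSet = new Set<string>();
  for (const [providerId, entries] of Object.entries(cached)) {
    for (const entry of entries) {
      const modelId = toModelId(entry);
      if (modelId) modelSet.add(`${providerId}/${modelId}`); // skips invalid entries gracefully
    }
  }
  return modelSet;
}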
Additional Context — Upstream Root Cause
Manual cache population is required because OpenCode SDK's client.model.list() does not return local Ollama models. This is a known issue at the OpenCode level:
OpenCode SDK Issues (Root Cause)
- Local Ollama models not included in model listing, breaking plugin model discovery anomalyco/opencode#12243 — Our report: local Ollama models not in model listing
- Auto-discover models from OpenAI-compatible provider endpoints anomalyco/opencode#6231 — Canonical feature request: auto-discover models from OpenAI-compatible endpoints (assigned to @thdxr)
- [FEATURE]: Ollama - Multiple Ollama Cloud Models are missing anomalyco/opencode#7873 — Ollama Cloud models missing
- Model List differ between the CLI and Web. anomalyco/opencode#9581 — Model list differs between CLI and Web
- The cache of the Provider‘s Models cannot be updatable anomalyco/opencode#7714 — Provider's Models cache not updatable
Active Upstream Development
OpenCode is actively building auto-discovery on the models-endpoint feature branch:
- feat(opencode): add auto model detection for OpenAI-compatible providers anomalyco/opencode#8359 [MERGED → models-endpoint, Jan 30] — Auto model detection via /v1/models
- opencode: added logic to probe loaded models from lmstudio, ollama an… anomalyco/opencode#11951 [OPEN → models-endpoint, Feb 3] — Ollama/LM Studio/llama.cpp probing with dedicated provider/local/ollama.ts
Feature branch has not yet been promoted to dev, so PR #1509 serves as a bridge fix covering the gap until OpenCode ships auto-discovery.
Prior Contribution
- PR docs: add Ollama streaming NDJSON issue guide and workaround #1197 (merged 2026-01-28) — NDJSON parser for Ollama streaming responses (different issue)
Operating System
Linux
OpenCode Version
1.1.51