This plugin allows your /commands to:
- Queue up prompts, other /commands, and subagents with arguments
- Parallelize the parts you want
- Pass session context to subagents
- Steer the agentic flow from start to finish
If you already know opencode commands, you'll be right at home.
| Feature | Description |
| --- | --- |
| `return` | Instruct the main session on the command/subtask(s) result - can be chained, supports /commands |
| `parallel` | Run subtasks concurrently - pending PR merge ⚠️ |
| `arguments` | Pass arguments with command frontmatter or the `\|\|` message pipe |
| `$TURN[n]` | Syntax to pipe session context into your command |
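As a quick taste, a minimal command file might combine these features (the `/style-check` helper used here is made up for illustration):

```md
---
description: review changes, then plan next steps
subtask: true
parallel: /style-check
return:
  - Summarize all findings and propose next steps
---
Review the changes in $ARGUMENTS
```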
Requires this PR for the parallel features and for proper model inheritance (piping the right model and agent to the right subtask and back) to work.
## Feature documentation
Use `return` to tell the main agent what to do after a command completes; it supports chaining and can trigger other commands. The return prompt is appended to the main session on command or subtask completion.
```md
---
subtask: true
return: Look again, challenge the findings, then implement the valid fixes.
---
Review the PR #$ARGUMENTS for bugs.
```

For multiple sequential prompts, use an array:
```md
---
subtask: true
return:
  - Implement the fix
  - Run the tests
---
Find the bug in auth.ts
```

Trigger /commands in `return` using `/command args` syntax:
```md
---
subtask: true
return:
  - /revise-plan make the UX as horribly impractical as imaginable
  - /implement-plan
  - Send this to my mother in law
---
Design the auth system for $ARGUMENTS
```

By default, opencode injects a user message after a `subtask: true` command completes, asking the model to "summarize the task tool output...". Subtask2 replaces that message with the return prompt:
- The first `return` replaces opencode's "summarize" message or fires as a follow-up
- Any additional `return` entries fire sequentially after each LLM turn completes - they accept /commands
- Commands (starting with `/`) are executed as full commands with their own `parallel` and `return`
Note: The first `return` of a `subtask: true` command cannot be a slash command, as it substitutes the opencode-injected message (as a string)
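As a sketch of that constraint (the `/write-tests` command below is illustrative), keep the first `return` entry a plain prompt and chain slash commands afterwards:

```md
---
subtask: true
return:
  - Summarize the findings in plain language
  - /write-tests cover the reported edge cases
---
Audit the error handling in $ARGUMENTS
```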
Override the model for any command invocation without modifying the command file. Attach the override directly to the command name with no space:
```
/plan{model:anthropic/claude-sonnet-4} design auth system
```

It also works inside `return` entries:

```
return:
  - /plan{model:github-copilot/claude-sonnet-4.5}
  - /plan{model:openai/gpt-5.2}
  - Compare both plans and pick the best approach
```

This lets you reuse a single command template with different models - no need to duplicate commands just to change the model.
Syntax: `{model:provider/model-id}` - must be attached directly to the command (no space).
Priority: inline `{model:...}` > frontmatter `model:` field
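For instance, assuming a hypothetical `plan.md` that pins a model in its frontmatter, an inline override would win for that single invocation:

```md
---
model: github-copilot/claude-opus-4.5
subtask: true
---
Plan the implementation for $ARGUMENTS
```

```
/plan{model:openai/gpt-5.2} design the billing flow
```

Here the run would use `openai/gpt-5.2` even though the file declares `github-copilot/claude-opus-4.5`.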
Spawn additional command subtasks alongside the main one:
`plan.md`:

```md
---
subtask: true
parallel:
  - /plan-gemini
  - /plan-opus
return:
  - Compare and challenge the plans, keep the best bits and make a unified proposal
  - Critically review the plan directly against what reddit has to say about it
---
Plan a trip to $ARGUMENTS.
```

This runs 3 subtasks in parallel:
- The main command (`plan.md`)
- `plan-gemini`
- `plan-opus`

When ALL complete, the main session receives the `return` prompt of the main command.
You can pass arguments inline when using the command, with `||` separators. Pipe segments map in chronological order: main → parallels → return /commands.

```
/mycommand main args || pipe1 || pipe2 || pipe3
```

and/or via frontmatter:
```
parallel:
  - command: research-docs
    arguments: authentication flow
  - command: research-codebase
    arguments: auth middleware implementation
  - /security-audit
return: Synthesize all findings into an implementation plan.
```

- `research-docs` gets "authentication flow" as `$ARGUMENTS`
- `research-codebase` gets "auth middleware implementation"
- `security-audit` inherits the main command's `$ARGUMENTS`
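To make the chronological mapping concrete, here is a sketch (the `/investigate` command is made up; `/research-docs` and `/security-audit` are reused from above):

```md
---
subtask: true
parallel: /research-docs
return:
  - /security-audit
---
Investigate $ARGUMENTS
```

```
/investigate auth flow || token storage || session fixation
```

Under main → parallels → return ordering, "auth flow" would reach the main command, "token storage" `/research-docs`, and "session fixation" `/security-audit`.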
You can use `/command args` syntax for inline arguments:

```
parallel: /security-review focus on auth, /perf-review check db queries
```

Or for all commands to inherit the main `$ARGUMENTS`:

```
parallel: /research-docs, /research-codebase, /security-audit
```

Note: Parallel commands are forced into subtasks regardless of their own `subtask` setting. Their `return` prompts are ignored - only the parent's `return` applies. Nested parallels are automatically flattened (max depth: 5).
For `subtask: true` commands, this plugin replaces opencode's generic "summarize" message with the return prompt. If `return` is undefined and `"replace_generic": true`, subtask2 uses:

> Review, challenge and validate the task output against the codebase then continue with the next logical step.

Configure this in `~/.config/opencode/subtask2.jsonc` (full example at the end of this README).
Use $TURN[n] to inject the last N conversation turns (user + assistant messages) into your command. This is powerful for commands that need context from the ongoing conversation.
```md
---
description: summarize our conversation so far
subtask: true
---
Review the following conversation and provide a concise summary:

$TURN[10]
```

Syntax options:
- `$TURN[6]` - last 6 messages
- `$TURN[:3]` - just the 3rd message from the end
- `$TURN[:2:5:8]` - specific messages at indices 2, 5, and 8
- `$TURN[*]` - all messages in the session
Usage in arguments:
```
/my-command analyze this $TURN[5]
```

Syntax:
- `$TURN[12]` - last 12 messages (turns, not parts)
- `$TURN[:3]` - just the 3rd message from the end
- `$TURN[:2:5:8]` - specific messages at indices 2, 5, and 8 from the end
Output format:

```
--- USER ---
What's the best way to implement auth?
--- ASSISTANT ---
I'd recommend using JWT tokens with...
--- USER ---
Can you show me an example?
...
```
Works in:
- Command body templates
- Command arguments
- Parallel command prompts
- Piped arguments (`||`)
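As a small sketch of the last two cases (the `review-style` command is illustrative; `/my-command` is reused from above), `$TURN[n]` expands inside parallel command prompts and piped arguments alike:

```md
---
subtask: true
parallel:
  - command: review-style
    arguments: check this exchange $TURN[4]
return: Merge both reviews into one set of suggestions
---
Review the code discussed in $TURN[6]
```

```
/my-command summarize this thread || focus on $TURN[3]
```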
## Some examples
### Parallel subtask with different models (A/B/C plan comparison)
```md
---
description: multi-model ensemble, 3 models plan in parallel, best ideas unified
model: github-copilot/claude-opus-4.5
subtask: true
parallel: /plan-gemini, /plan-gpt
return:
  - Compare all 3 plans and validate each directly against the codebase. Pick the best ideas from each and create a unified implementation plan.
  - /review-plan focus on simplicity and correctness
---
Plan the implementation for the following feature

> $ARGUMENTS
```

### Isolated "Plan" mode
```md
---
description: two-step implementation planning and validation
agent: build
subtask: true
return:
  - Challenge, verify and validate the plan by reviewing the codebase directly. Then approve, revise, or reject the plan. Implement if solid
  - Take a step back, review what was done/planned for correctness, revise if needed
---
In this session you WILL ONLY PLAN AND NOT IMPLEMENT. You are to take the `USER INPUT` and research the codebase until you have gathered enough knowledge to elaborate a full-fledged implementation plan
You MUST consider alternative paths and keep researching until you are confident you found the BEST possible implementation
BEST often means simple, lean, clean, low surface and coupling
Make it practical, maintainable and not overly abstracted
Follow your heart
> DO NOT OVERENGINEER SHIT

USER INPUT
$ARGUMENTS
```

### Multi-step workflow
```md
---
description: design, implement, test, document
agent: build
model: github-copilot/claude-opus-4.5
subtask: true
return:
  - Implement the component following the conceptual design specifications.
  - Write comprehensive unit tests for all edge cases.
  - Update the documentation and add usage examples.
  - Run the test suite and fix any failures.
---
Conceptually design a React modal component with the following requirements

> $ARGUMENTS
```

## Demo files
Prompt used in the demo:
```
/subtask2 10 || pipe2 || pipe3 || pipe4 || pipe5
```
`subtask2.md`:

```md
---
description: subtask2 plugin test command
agent: build
subtask: true
parallel: /subtask2-parallel-test PARALLEL
return:
  - say the phrase "THE RETURN PROMPT MADE ME SAY THIS" and do NOTHING else
  - say the phrase "YOU CAN CHAIN PROMPTS, COMMANDS, OR SUBTASKS - NO LIMITS" and do NOTHING else
  - /subtask2-nested-parallel CHAINED-COMMAND-SUBTASK
---
please count to $ARGUMENTS
```

`subtask2-parallel-test.md`:
```md
---
agent: plan
model: github-copilot/grok-code-fast-1
parallel: /subtask2-nested-parallel NESTED-PARALLEL
subtask: true
---
say the word "$ARGUMENTS" 3 times
```

`subtask2-nested-parallel.md`:
```md
---
agent: explore
model: github-copilot/gpt-4.1
subtask: true
return:
  - say the phrase "COMPOSE AS SIMPLE OR COMPLEX A WORKFLOW AS YOU WANT" and do NOTHING else
  - /subtask2-parallel-test LAST CALL
---
say the word "$ARGUMENTS" 3 times
```

## Installation

To install, add subtask2 to your opencode config plugin array:

```json
{
  "plugins": ["@openspoon/subtask2@latest"]
}
```


`~/.config/opencode/subtask2.jsonc`:

```jsonc
{
  // Replace generic prompt when no 'return' is specified
  "replace_generic": true, // defaults to true

  // Custom fallback (optional - has built-in default)
  "generic_return": "custom return prompt"
}
```