Extend opencode /commands into a powerful orchestration system

TL;DR - A less entropic agentic loop with more user flow control

This plugin allows your /commands to:

  • Queue up prompts, other /commands, and subagents with arguments
  • Parallelize the parts you want
  • Pass session context to subagents
  • Steer the agentic flow from start to finish

If you already know opencode commands, you'll be right at home.

Key features

  • return - instruct the main session on a command/subtask's result; can be chained, supports /commands
  • parallel - run subtasks concurrently (pending PR merge ⚠️)
  • arguments - pass arguments via command frontmatter or the || message pipe
  • $TURN[n] - pipe session context into your command

This PR is required for the parallel features to work, as well as for proper model inheritance (piping the right model and agent to the right subtask and back).


Feature documentation

1. return - Or the old 'look again' trick

Use return to tell the main agent what to do after a command completes; it supports chaining and triggering other commands. The return prompt is appended to the main session on command or subtask completion.

---
subtask: true
return: Look again, challenge the findings, then implement the valid fixes.
---
Review PR #$ARGUMENTS for bugs.

For multiple sequential prompts, use an array:

---
subtask: true
return:
  - Implement the fix
  - Run the tests
---
Find the bug in auth.ts

Trigger /commands in return using /command args syntax:

---
subtask: true
return:
  - /revise-plan make the UX as horribly impractical as imaginable
  - /implement-plan
  - Send this to my mother-in-law
---
Design the auth system for $ARGUMENTS

By default, opencode injects a user message after a subtask: true command completes, asking the model to "summarize the task tool output...". Subtask2 replaces that message with the return prompt.

  • The first return replaces opencode's "summarize" message, or fires as a follow-up
  • Any additional returns fire sequentially after each LLM turn completes - /commands are accepted
  • Commands (starting with /) are executed as full commands with their own parallel and return

Note: The first return of a subtask: true command cannot be a slash command, as it substitutes the opencode-injected message (as a string).
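For example (an illustrative sketch; /write-tests is a hypothetical command), keep the first return a plain prompt and chain /commands afterwards:

---
subtask: true
return:
  - Summarize the findings and list the affected files
  - /write-tests cover the listed files
---
Audit the error handling in $ARGUMENTS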

2. {model:...} - Inline model override ⚠️ PENDING PR

Override the model for any command invocation without modifying the command file. Attach the override directly to the command name with no space:

/plan{model:anthropic/claude-sonnet-4} design auth system
return:
  - /plan{model:github-copilot/claude-sonnet-4.5}
  - /plan{model:openai/gpt-5.2}
  - Compare both plans and pick the best approach

This lets you reuse a single command template with different models - no need to duplicate commands just to change the model.

Syntax: {model:provider/model-id} - must be attached directly to the command (no space).

Priority: inline {model:...} > frontmatter model: field
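As a sketch of that priority (plan.md contents are illustrative; the model IDs are reused from the example above), a command whose frontmatter pins a model can still be overridden per invocation:

plan.md

---
model: anthropic/claude-sonnet-4
subtask: true
---
Plan the implementation for $ARGUMENTS

/plan{model:openai/gpt-5.2} add rate limiting

Here the inline {model:openai/gpt-5.2} takes priority, so this invocation runs on openai/gpt-5.2 rather than the frontmatter model.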

3. parallel - Run multiple subtasks concurrently ⚠️ PENDING PR

Spawn additional command subtasks alongside the main one:

plan.md

---
subtask: true
parallel:
  - /plan-gemini
  - /plan-opus
return:
  - Compare and challenge the plans, keep the best bits and make a unified proposal
  - Critically review the plan directly against what reddit has to say about it
---
Plan a trip to $ARGUMENTS.

This runs 3 subtasks in parallel:

  1. The main command (plan.md)
  2. plan-gemini
  3. plan-opus

When ALL complete, the main session receives the return prompt of the main command.

With custom arguments per command

You can pass arguments inline when invoking the command, using || separators. Pipe segments map in chronological order: main → parallels → return /commands.

/mycommand main args || pipe1 || pipe2 || pipe3

and/or, with per-command arguments in frontmatter:

parallel:
  - command: research-docs
    arguments: authentication flow
  - command: research-codebase
    arguments: auth middleware implementation
  - /security-audit
return: Synthesize all findings into an implementation plan.

  • research-docs gets "authentication flow" as $ARGUMENTS
  • research-codebase gets "auth middleware implementation"
  • security-audit inherits the main command's $ARGUMENTS

You can use /command args syntax for inline arguments:

parallel: /security-review focus on auth, /perf-review check db queries

Or for all commands to inherit the main $ARGUMENTS:

parallel: /research-docs, /research-codebase, /security-audit

Note: Parallel commands are forced into subtasks regardless of their own subtask setting. Their return prompts are ignored - only the parent's return applies. Nested parallels are automatically flattened (max depth: 5).

Priority: pipe args > frontmatter args > inherit main args
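Putting the pieces together, a sketch of how pipe segments map (mycommand.md is hypothetical; /research-docs and /security-audit are reused from above):

mycommand.md

---
subtask: true
parallel: /research-docs
return:
  - /security-audit
---
Plan the work for $ARGUMENTS

/mycommand add OAuth login || token storage || audit the session layer

Reading the segments chronologically, the main command should get "add OAuth login" as $ARGUMENTS, the parallel /research-docs should get "token storage", and the return /security-audit should get "audit the session layer".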

4. Subtask return fallback and custom defaults

For subtask: true commands, this plugin replaces the opencode generic "summarize" message with the return prompt. If return is undefined and "replace_generic" is true, subtask2 uses:

Review, challenge and validate the task output against the codebase then continue with the next logical step.

Configure in ~/.config/opencode/subtask2.jsonc:

{
  // Replace generic prompt when no 'return' is specified
  "replace_generic": true, // defaults to true

  // Custom fallback (optional - has built-in default)
  "generic_return": "custom return prompt"
}

Priority: return param > config generic_return > built-in default > opencode original
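To illustrate the fallback chain (review.md is a hypothetical example file), a subtask: true command that defines no return falls through it:

review.md

---
subtask: true
---
Review $ARGUMENTS for dead code and unused exports

With the default config, the main session receives the built-in fallback prompt quoted above once the subtask completes; set generic_return to customize it, or "replace_generic": false to keep opencode's original "summarize" message.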

5. $TURN[n] - Reference previous conversation turns

Use $TURN[n] to inject the last N conversation turns (user + assistant messages) into your command. This is powerful for commands that need context from the ongoing conversation.

---
description: summarize our conversation so far
subtask: true
---
Review the following conversation and provide a concise summary:

$TURN[10]

Syntax options:

  • $TURN[6] - last 6 messages (turns, not parts)
  • $TURN[:3] - just the 3rd message from the end
  • $TURN[:2:5:8] - specific messages at indices 2, 5, and 8 from the end
  • $TURN[*] - all messages in the session

Usage in arguments:

/my-command analyze this $TURN[5]

Output format:

--- USER ---
What's the best way to implement auth?

--- ASSISTANT ---
I'd recommend using JWT tokens with...

--- USER ---
Can you show me an example?
...

Works in:

  • Command body templates
  • Command arguments
  • Parallel command prompts
  • Piped arguments (||)
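For instance (a sketch; summarize-context is a hypothetical command), $TURN expands in a parallel command's arguments and in piped segments alike:

parallel:
  - command: summarize-context
    arguments: $TURN[6]

/my-command review this decision || $TURN[4]

In both cases the receiving command gets the referenced turns expanded into its $ARGUMENTS.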

Some examples

Parallel subtask with different models (A/B/C plan comparison)

---
description: multi-model ensemble, 3 models plan in parallel, best ideas unified
model: github-copilot/claude-opus-4.5
subtask: true
parallel: /plan-gemini, /plan-gpt
return:
  - Compare all 3 plans and validate each directly against the codebase. Pick the best ideas from each and create a unified implementation plan.
  - /review-plan focus on simplicity and correctness
---
Plan the implementation for the following feature
> $ARGUMENTS

Isolated "Plan" mode

---
description: two-step implementation planning and validation
agent: build
subtask: true
return:
  - Challenge, verify and validate the plan by reviewing the codebase directly. Then approve, revise, or reject the plan. Implement if solid
  - Take a step back, review what was done/planned for correctness, revise if needed
---
In this session you WILL ONLY PLAN AND NOT IMPLEMENT. You are to take the `USER INPUT` and research the codebase until you have gathered enough knowledge to elaborate a full fledged implementation plan

You MUST consider alternative paths and keep researching until you are confident you found the BEST possible implementation

BEST often means simple, lean, clean, low surface and coupling
Make it practical, maintainable and not overly abstracted

Follow your heart
> DO NOT OVERENGINEER SHIT

USER INPUT
$ARGUMENTS

Multi-step workflow

---
description: design, implement, test, document
agent: build
model: github-copilot/claude-opus-4.5
subtask: true
return:
  - Implement the component following the conceptual design specifications.
  - Write comprehensive unit tests for all edge cases.
  - Update the documentation and add usage examples.
  - Run the test suite and fix any failures.
---
Conceptually design a React modal component with the following requirements
> $ARGUMENTS

Demo files

Prompt used in the demo: /subtask2 10 || pipe2 || pipe3 || pipe4 || pipe5

subtask2.md

---
description: subtask2 plugin test command
agent: build
subtask: true
parallel: /subtask2-parallel-test PARALLEL
return:
  - say the phrase "THE RETURN PROMPT MADE ME SAY THIS" and do NOTHING else
  - say the phrase "YOU CAN CHAIN PROMPTS, COMMANDS, OR SUBTASKS - NO LIMITS" and do NOTHING else
  - /subtask2-nested-parallel CHAINED-COMMAND-SUBTASK
---
please count to $ARGUMENTS

subtask2-parallel-test.md

---
agent: plan
model: github-copilot/grok-code-fast-1
parallel: /subtask2-nested-parallel NESTED-PARALLEL
subtask: true
---
say the word "$ARGUMENTS" 3 times

subtask2-nested-parallel.md

---
agent: explore
model: github-copilot/gpt-4.1
subtask: true
return:
  - say the phrase "COMPOSE AS SIMPLE OR COMPLEX A WORKFLOW AS YOU WANT" and do NOTHING else
  - /subtask2-parallel-test LAST CALL
---
say the word "$ARGUMENTS" 3 times

Installation

To install, add subtask2 to your opencode config plugin array:
{
  "plugins": ["@openspoon/subtask2@latest"]
}
