# Subtask2

A better opencode /command handler

TL;DR: Lower session entropy with a more deterministic agentic loop.
This plugin allows your opencode /commands to:
- Chain prompts, /commands and subagents seamlessly
- Relay subagent results or session context to other subagents
- Loop or parallelize subagents
- Run commands on the fly with the `/subtask` command
- Override /command parameters inline (model, agent, return, parallel...)
If you already use opencode /commands, you'll be right at home; if not, start with this page.

To install, add subtask2 to your opencode configuration:

```json
{
  "plugins": ["@spoons-and-mirrors/subtask2@latest"]
}
```
## Key Features

- `return` - instruct the main session on command/subtask(s) result
- `loop` - loop a subtask until a user condition is met
- `parallel` - run subtasks concurrently (pending PR)
- `$TURN[n]` - pass session turns (user/assistant messages)
- `{as:name}` + `$RESULT[name]` - capture and reference subtask outputs
- Inline syntax for model, agent, and ad-hoc subtasks
Requires this PR for the parallel feature
<details> <summary><strong>1. <code>return</code> - Chaining prompts and commands</strong></summary>
Use return to tell the main agent what to do after a command completes. Supports prompts, /commands, and chaining.
```markdown
---
subtask: true
return: Look again, challenge the findings, then implement the valid fixes.
---
Review the PR# $ARGUMENTS for bugs.
```
For multiple sequential prompts, use an array:
```markdown
---
subtask: true
return:
  - Implement the fix
  - Run the tests
---
Find the bug in auth.ts
```
Trigger /commands in `return`:

```markdown
---
subtask: true
return:
  - /revise-plan make the UX as horribly impractical as imaginable
  - /implement-plan
  - Send this to my mother-in-law
---
Design the auth system for $ARGUMENTS
```
**How return prompts work:**

When a `subtask: true` command completes, OpenCode normally injects a hidden synthetic user message asking the model to "summarize the task tool output..." - Subtask2 completely removes this message and handles returns differently:
- Prompt returns: fired as real user messages visible in your conversation. You'll see the return prompt appear as if you typed it.
- Command returns (starting with `/`): the command executes immediately.

This gives you full visibility into what's driving the agent's next action. /commands are executed as full commands with their own `parallel` and `return`.
</details> <details> <summary><strong>2. <code>loop</code> - Repeat until condition is met</strong></summary>
Run a command repeatedly, either a fixed number of times or until a condition is satisfied.
Unconditional loop (fixed iterations):

```
/generate-tests {loop:5} generate unit tests for auth module
```

Runs exactly 5 times with no evaluation - the main session just yields between iterations.
Conditional loop (with evaluation):

```
/fix-tests {loop:10 && until:all tests pass with good coverage}
```
Frontmatter:

```markdown
---
loop:
  max: 10
  until: "all features implemented correctly"
---
Implement the auth system.
```
In return chains:

```yaml
return:
  - /implement-feature
  - /fix-tests {loop:5 && until:tests are green}
  - /commit
```
How it works (orchestrator-decides pattern):
- Subtask runs and completes
- Main session receives an evaluation prompt with the condition
- Main LLM evaluates: reads files, checks git, runs tests if needed
- Responds with `<subtask2 loop="break"/>` (satisfied) or `<subtask2 loop="continue"/>` (more work needed)
- If continue → loop again. If break → proceed to the next step
- Max iterations is a safety net
Why this works:
- The main session (orchestrator) has full context of what was done
- It can verify by reading actual files, git diff, test output
- No fake "DONE" markers - real evaluation of real conditions
- The `until:` is a human-readable condition, not a magic keyword
Best practices:
- Write clear conditions: `until: "tests pass"`, not `until: "DONE"`
- Always set a reasonable `max` as a safety net
- The condition is shown to the evaluating LLM verbatim
Priority: inline `{loop:...}` > frontmatter `loop:`
</details> <details> <summary><strong>3. <code>parallel</code> - Run subtasks concurrently</strong></summary>
Spawn additional command subtasks alongside the main one:
`plan.md`:

```markdown
---
subtask: true
parallel:
  - /plan-gemini
  - /plan-opus
return:
  - Compare and challenge the plans, keep the best bits and make a unified proposal
  - Critically review the plan directly against what reddit has to say about it
---
Plan a trip to $ARGUMENTS.
```
This runs 3 subtasks in parallel:
- The main command (`plan.md`)
- `/plan-gemini`
- `/plan-opus`

When ALL complete, the main session receives the return prompt of the main command.
**With custom arguments per command**

You can pass arguments inline when using the command with `||` separators. Pipe segments map in chronological order: main → parallels → return /commands.

```
/mycommand main args || pipe1 || pipe2 || pipe3
```
and/or via frontmatter:

```yaml
parallel:
  - command: research-docs
    arguments: authentication flow
  - command: research-codebase
    arguments: auth middleware implementation
  - /security-audit
return: Synthesize all findings into an implementation plan.
```
- `research-docs` gets "authentication flow" as `$ARGUMENTS`
- `research-codebase` gets "auth middleware implementation"
- `security-audit` inherits the main command's `$ARGUMENTS`
You can use `/command args` syntax for inline arguments:

```yaml
parallel: /security-review focus on auth, /perf-review check db queries
```

Or for all commands to inherit the main `$ARGUMENTS`:

```yaml
parallel: /research-docs, /research-codebase, /security-audit
```
Note: Parallel commands are forced into subtasks regardless of their own `subtask` setting. Their `return`s are ignored - only the parent's `return` applies. Nested parallels are automatically flattened, with a maximum depth of 5 to prevent infinite recursion.
Priority: pipe args > frontmatter args > inherit main args
</details> <details> <summary><strong>4. Context & Results - <code>$TURN</code>, <code>{as:name}</code>, <code>$RESULT</code></strong></summary>
Pass conversation context to subtasks and capture their outputs for later use.
**`$TURN[n]` - Reference previous conversation turns**
Use $TURN[n] to inject the last N conversation turns (user + assistant messages) into your command. This is powerful for commands that need context from the ongoing conversation.
```markdown
---
description: summarize our conversation so far
subtask: true
---
Review the following conversation and provide a concise summary:

$TURN[10]
```
Syntax options:
- `$TURN[6]` - last 6 messages
- `$TURN[:3]` - just the 3rd message from the end
- `$TURN[:2:5:8]` - specific messages at indices 2, 5, and 8
- `$TURN[*]` - all messages in the session
Usage in arguments:

```
/my-command analyze this $TURN[5]
```
Output format:

```
--- USER ---
What's the best way to implement auth?
--- ASSISTANT ---
I'd recommend using JWT tokens with...
--- USER ---
Can you show me an example?
...
```
Works in:
- Command body templates
- Command arguments
- Parallel command prompts
- Piped arguments (`||`)
**`{as:name}` and `$RESULT[name]` - Named results**
Capture command outputs and reference them later in return chains. Works with any command type - subtasks, parallel commands, inline subtasks, and even regular non-subtask commands.
Multi-model comparison with named results:

```yaml
subtask: true
parallel:
  - /plan {model:anthropic/claude-sonnet-4 && as:claude-plan}
  - /plan {model:openai/gpt-4o && as:gpt-plan}
return:
  - /deep-analysis {as:analysis}
  - "Compare $RESULT[claude-plan] vs $RESULT[gpt-plan] using insights from $RESULT[analysis]"
```
This runs two planning subtasks with different models, then a deep analysis, then compares all three results in the final return.
In return chains:

```yaml
return:
  - /research {as:research}
  - /design {as:design}
  - "Implement based on $RESULT[research] and $RESULT[design]"
```
With inline subtasks:

```yaml
return:
  - /subtask {model:openai/gpt-4o && as:gpt-take} analyze the auth flow
  - /subtask {model:anthropic/claude-sonnet-4 && as:claude-take} analyze the auth flow
  - "Synthesize $RESULT[gpt-take] and $RESULT[claude-take] into a unified analysis"
```
Syntax: `{as:name}` - can be combined with other overrides using `&&`.
How it works:
- When a subtask with `as:name` completes, its final output is captured
- The result is stored and associated with the parent session
- When processing return prompts, `$RESULT[name]` is replaced with the captured output
- If a result isn't found, it's replaced with `[Result 'name' not found]`
</details> <details> <summary><strong>5. Inline Syntax</strong></summary>
Override command parameters or create subtasks on the fly without modifying command files.
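For example, an ad-hoc subtask can be spawned mid-conversation without any command file. A minimal sketch (the prompt and the `quick-review` name are hypothetical; the model id is one used elsewhere in this README):

```
/subtask {model:openai/gpt-4o && as:quick-review} review the error handling in auth.ts
```

If `{as:...}` capture behaves here as it does in return chains, the output would later be referenceable via `$RESULT[quick-review]`.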
**`{model:...}` - Model override**

Override the model for any command invocation:

```
/plan {model:anthropic/claude-sonnet-4} design auth system
```

Or in a return chain:

```yaml
return:
  - /plan {model:github-copilot/claude-sonnet-4.5}
  - /plan {model:openai/gpt-5.2}
  - Compare both plans and pick the best approach
```
This lets you reuse a single command template with different models - no need to duplicate commands just to change the model.
**`{agent:...}` - Agent override**

Override the agent for any command invocation:
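A sketch of what an agent override could look like, assuming a `/review` command and an agent named `security` are defined in your setup (both hypothetical names):

```
/review {agent:security} audit the login flow
```

As with `{as:name}`, this can be combined with other overrides using `&&`, e.g. `{agent:security && as:sec-review}`.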
</details>