# ORCH

One CLI to orchestrate them all. Manage a team of AI agents executing tasks in parallel from your terminal.
## Install / Use
```shell
npm install -g @oxgeneral/orch   # Install
cd ~/your-project && orch        # Launch TUI
```
<br/>
<!-- Divider -->
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./assets/divider-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="./assets/divider-light.svg">
<img alt="" src="./assets/divider-dark.svg" width="100%">
</picture>
<br/>
<div align="center">
<video src="https://github.com/user-attachments/assets/c7c3ab77-e718-4e5a-a8cf-bfc446ace64e" width="100%" controls autoplay loop muted></video>
</div>
<p align="center">
<em>Set a goal at 10pm. Five agents decompose, implement, test, and review. You wake up to pull requests.</em>
</p>
<br/>
<!-- Divider -->
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./assets/divider-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="./assets/divider-light.svg">
<img alt="" src="./assets/divider-dark.svg" width="100%">
</picture>
<br/>
## You hired AI agents. Now you're managing them full-time.
You bought Claude, Codex, maybe Cursor. Each one is powerful alone. But your actual job isn't "use AI tools" — it's "ship a product at the speed of a full team" while being one person.
Here's what that looks like today:
- You open 3 terminals. Copy-paste context between them. Forget which agent is doing what.
- One agent edits a file another is working on. Merge conflict. You fix it manually.
- An agent crashes at 2am. You don't notice until morning. Half a night wasted.
- You spend 40-60% of your time routing agents instead of building your product.
You're not the founder. You're the bottleneck.
<br/>

## What if your agents coordinated themselves?
```
$ orch org deploy startup-mvp --goal "Implement user auth with OAuth2"

✓ Deployed team "platform" — 5 agents

  CTO       (claude) → Decomposing goal into tasks...
  Backend A (claude) → Waiting for tasks
  Backend B (codex)  → Waiting for tasks
  QA        (codex)  → Waiting for tasks
  Reviewer  (claude) → Waiting for reviews

✓ CTO created 6 tasks from goal

$ orch run --all --watch

22:03 ▶ Backend A → "Implement OAuth2 flow" [feature/oauth]
22:03 ▶ Backend B → "JWT token service" [feature/jwt]
22:03 ▶ QA → waiting for implementations...
22:15 ✓ Backend B DONE (12m · 4,200 tokens)
22:15 ▶ QA → "Test JWT service" [test/jwt]
22:22 ✓ Backend A DONE (19m · 8,100 tokens)
22:24 ↻ QA RETRY attempt 2/3
22:28 ✓ QA DONE (6m · 2,800 tokens)
22:29 ▶ Reviewer → "Review OAuth2 implementation"
22:33 ✓ Reviewer DONE → all tasks in review
```

→ You went to sleep at 22:05.
→ You wake up to 6 tasks in review. Approve. Merge. Ship.
<p align="center"><strong>One goal. Five agents. Six PRs. Zero tab-switching. $4.20 in tokens.</strong></p>
<br/>
<!-- Divider -->
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./assets/divider-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="./assets/divider-light.svg">
<img alt="" src="./assets/divider-dark.svg" width="100%">
</picture>
<br/>
## Start coordinating agents in 30 seconds
<!-- Install Card -->
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="./assets/install-dark.svg">
  <source media="(prefers-color-scheme: light)" srcset="./assets/install-light.svg">
  <img alt="Install ORCH" src="./assets/install-dark.svg" width="100%">
</picture>
<br/>

That's it. ORCH auto-initializes and opens the TUI dashboard. Add agents, set goals, and run — right from there.
### Claude Code integration

After install, the `/orch` skill is automatically available in Claude Code. Just type `/orch` and describe what you need in natural language:

```
/orch deploy a team to refactor the auth module and add tests
```

Claude will translate your intent into the right `orch` commands — create agents, tasks, and goals, and run the orchestration. No need to memorize CLI flags.
Or deploy a pre-built team:

```shell
orch org deploy startup-mvp --goal "Build invoicing SaaS with Stripe"
orch run --all --watch
```
## System requirements
<table> <tr> <td width="50%" valign="top">

**Minimum — 1-2 agents**

| | |
|---|---|
| OS | macOS, Linux, WSL2 |
| CPU | 2 cores |
| RAM | 4 GB |
| Disk | 300 MB |
| Node.js | >= 20 |

</td> <td width="50%" valign="top">

**Recommended (full department) — 4-6 agents**

| | |
|---|---|
| OS | macOS, Linux, WSL2 |
| CPU | 4+ cores |
| RAM | 8 GB |
| Disk | 1 GB |
| Node.js | >= 20 |

</td> </tr> </table>

<p align="center">No database. No cloud. No Docker. No GPU — LLMs run via API, not locally.</p>

### Your code is safe
Every agent works in an isolated git worktree. Your `main` branch is never touched until you explicitly approve and merge. A mandatory review step in the state machine means no code ships without your OK, and agents can't overwrite each other's work.

<details>
<summary><strong>Why does each agent need ~300 MB?</strong></summary>
<br/>

ORCH itself is lightweight (~120 MB). The RAM goes to the agent CLI processes that ORCH spawns — each is a separate Node.js or Python runtime:

| Agent process | RAM per instance | Why |
|---------------|------------------|-----|
| Claude Code CLI | 200-400 MB | Full Node.js runtime + context window |
| OpenCode | 200-400 MB | Node.js + provider SDK |
| Codex CLI | 150-300 MB | Python runtime + OpenAI SDK |
| Cursor CLI | 200-400 MB | Electron-based agent |
| Shell scripts | 10-50 MB | Depends on the tool |

Formula: 120 MB (ORCH) + N × ~300 MB per concurrent agent.
2 agents ≈ 0.7 GB, 4 agents ≈ 1.3 GB, 6 agents ≈ 2 GB.

</details>
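You can sanity-check the formula with plain shell arithmetic. The 120 MB and ~300 MB figures are this README's estimates, not measured values:

```shell
# RAM estimate: 120 MB for ORCH itself + ~300 MB per concurrent agent process.
agents=4
total=$((120 + agents * 300))
echo "${agents} agents: ${total} MB"   # 4 agents: 1320 MB, i.e. ~1.3 GB
```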
## How your AI team works
<table> <tr> <td width="50%" valign="top">

**CTO — strategic decomposition**
Set a high-level goal. Your CTO agent decomposes it into concrete tasks, assigns priorities, and delegates to the right departments. You set strategy — AI executes.

**Engineering Department — parallel execution**

Backend A, Backend B, Frontend — each agent gets its own git worktree (isolated branch). They work in parallel without file conflicts. Failed? Auto-retry with exponential backoff. Stalled? Zombie detection kills and re-queues.
</td> <td width="50%" valign="top">

**QA Department — automated verification**
QA agents pick up completed work, run tests, validate contracts. Reject with feedback → task goes back to engineering with your notes. The loop closes automatically.

**Inter-department communication**

Agents talk to each other — direct messages, team broadcasts, shared context store. Backend finishes auth module → sends message to QA → QA starts testing. No copy-paste. No manual routing.
</td> </tr> </table>

### Code Review — mandatory quality gate
Nothing touches `main` until reviewed: every task flows through the state machine. Every transition is validated. No task gets lost. No code merges without approval.
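The README doesn't name the exact states, but the lifecycle it describes (parallel execution, auto-retry up to 3 attempts, rejected work returning to engineering) suggests a flow along these lines. The state names below are illustrative, not ORCH's actual identifiers:

```
queued → in_progress → done → in_review → approved → merged
            │   ▲                  │
            ▼   │ auto-retry       ▼ rejected with feedback
          failed   (≤ 3 attempts)  back to queued
```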
<br/> <!-- Divider --> <picture> <source media="(prefers-color-scheme: dark)" srcset="./assets/divider-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="./assets/divider-light.svg"> <img alt="" src="./assets/divider-dark.svg" width="100%"> </picture>