
Mco

Orchestrate AI coding agents. Any prompt. Any agent. Any IDE. Neutral orchestration layer for Claude Code, Codex CLI, Gemini CLI, OpenCode, Qwen Code — works from Cursor, Trae, Copilot, Windsurf, or plain shell.

Install / Use

/learn @mco-org/Mco
About this skill

Quality Score

0/100

Supported Platforms

Claude Code
Claude Desktop
GitHub Copilot
Cursor
Windsurf
Gemini CLI
OpenAI Codex

README

<h1 align="center">MCO</h1> <p align="center"> <img src="./docs/assets/logos/mco-logo-readme.svg" alt="MCO Logo" width="520" /> </p> <p align="center"> <a href="https://www.npmjs.com/package/@tt-a1i/mco"><img src="https://img.shields.io/npm/v/@tt-a1i/mco?style=flat-square&color=cb3837&logo=npm&logoColor=white" alt="npm version" /></a> <a href="https://www.npmjs.com/package/@tt-a1i/mco"><img src="https://img.shields.io/npm/dm/@tt-a1i/mco?style=flat-square&color=cb3837" alt="npm downloads" /></a> <a href="https://github.com/mco-org/mco/stargazers"><img src="https://img.shields.io/github/stars/mco-org/mco?style=flat-square&color=f59e0b" alt="GitHub stars" /></a> <a href="./LICENSE"><img src="https://img.shields.io/badge/License-MIT-22c55e?style=flat-square" alt="License: MIT" /></a> <img src="https://img.shields.io/badge/Python-3.10%2B-3776AB?style=flat-square&logo=python&logoColor=white" alt="Python 3.10+" /> <img src="https://img.shields.io/badge/Providers-5%20built--in-7c3aed?style=flat-square" alt="5 built-in providers" /> <a href="https://pypi.org/project/evermemos-mcp/"><img src="https://img.shields.io/badge/evermemos--mcp-memory%20powered-6366f1?style=flat-square" alt="evermemos-mcp" /></a> </p> <p align="center"><strong>MCO — Orchestrate AI Coding Agents. Any Prompt. Any Agent. 
Any IDE.</strong></p> <p align="center"><strong>MCO equips your primary agent with an agent team: dispatch Claude, Codex, Gemini, OpenCode, and Qwen in parallel to execute tasks, review outputs, and synthesize consensus.</strong></p> <p align="center">English | <a href="./README.zh-CN.md">简体中文</a></p> <table align="center"> <tr> <td align="center"><a href="https://github.com/anthropics/claude-code"><img src="https://github.com/anthropics.png?size=96" alt="Claude Code" width="48" /></a></td> <td align="center"><a href="https://github.com/google-gemini/gemini-cli"><img src="https://github.com/google-gemini.png?size=96" alt="Gemini CLI" width="48" /></a></td> <td align="center"><a href="https://github.com/openai/codex"><img src="https://github.com/openai.png?size=96" alt="Codex CLI" width="48" /></a></td> <td align="center"><a href="https://github.com/sst/opencode"><img src="https://raw.githubusercontent.com/sst/opencode/master/packages/console/app/src/asset/brand/opencode-logo-light-square.svg" alt="OpenCode" width="48" /></a></td> <td align="center"><a href="https://github.com/QwenLM/qwen-code"><img src="https://github.com/QwenLM.png?size=96" alt="Qwen Code" width="48" /></a></td> <td align="center"><a href="https://github.com/open-claw/open-claw"><img src="https://cdn.jsdelivr.net/gh/twitter/twemoji@latest/assets/svg/1f99e.svg" alt="OpenClaw" width="48" /></a></td> </tr> <tr> <td align="center"><strong>Claude Code</strong></td> <td align="center"><strong>Gemini CLI</strong></td> <td align="center"><strong>Codex CLI</strong></td> <td align="center"><strong>OpenCode</strong></td> <td align="center"><strong>Qwen Code</strong></td> <td align="center"><strong>OpenClaw 🦞</strong></td> </tr> <tr> <td align="center"><code>claude</code></td> <td align="center"><code>gemini</code></td> <td align="center"><code>codex</code></td> <td align="center"><code>opencode</code></td> <td align="center"><code>qwen</code></td> <td align="center"><code>openclaw</code></td> </tr> </table>

AI coding agents are now standard tools for every developer. But one agent is just one perspective.

Work like a Tech Lead: assign one task to multiple agents, run in parallel, and compare outcomes before acting.

One command. Five agents working at once.

Works with OpenClaw

Running OpenClaw on your machine? It can use MCO as its multi-agent backbone. Just tell OpenClaw what you need:

"Use mco to run a security review on this repo with Claude, Codex, and Gemini. Synthesize the results."

OpenClaw reads mco -h, learns the CLI, and orchestrates the entire workflow autonomously. Your local machine becomes a multi-agent review team — OpenClaw is the manager, MCO is the dispatcher, and Claude/Codex/Gemini/OpenCode/Qwen are the team members.

This works the same way from Claude Code, Cursor, Trae, Copilot, Windsurf, or any agent that can run shell commands.

Demo video (Bilibili): "Arm OpenClaw with Command Authority: Build Your Own AI Legion" (给 OpenClaw 装上兵权:组建你自己的 AI 军团)

What is MCO

MCO (Multi-CLI Orchestrator) is a neutral orchestration layer for AI coding agents. It dispatches prompts to multiple agent CLIs in parallel, aggregates results, and returns structured output — JSON, SARIF, or PR-ready Markdown. No vendor lock-in. No workflow rewrite.
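For the SARIF path, the target is the report structure GitHub Code Scanning ingests. A minimal SARIF 2.1.0 skeleton of that shape looks like the following; the tool name, rule id, and finding details here are illustrative, not MCO's literal output:

```json
{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
  "runs": [
    {
      "tool": { "driver": { "name": "mco", "rules": [{ "id": "sql-injection" }] } },
      "results": [
        {
          "ruleId": "sql-injection",
          "level": "error",
          "message": { "text": "Unsanitized input reaches the ORM query. Detected by: claude, codex." },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": { "uri": "app/db.py" },
                "region": { "startLine": 42 }
              }
            }
          ]
        }
      ]
    }
  ]
}
```

GitHub accepts a file like this via the code-scanning upload API or the upload-sarif action, which is what makes a structured format more useful in CI than free-form review text.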

With the rise of agentic coding — led by projects like OpenClaw and the broad availability of Claude Code, Codex CLI, Gemini CLI, and more — every developer now has access to powerful AI agents. MCO takes the next step: instead of relying on a single agent, you orchestrate a team.

MCO is designed to be called by any orchestrating agent or AI-powered IDE — Claude Code, Cursor, Trae, Copilot, Windsurf, or OpenClaw. The calling agent organizes context, assigns tasks, and uses MCO to fan out work across multiple agents simultaneously. For example, OpenClaw running on your machine can call mco review to dispatch code reviews to Claude, Codex, and Gemini in parallel — turning your local setup into a multi-agent review team with a single command. Agents can also orchestrate each other: Claude Code can dispatch tasks to Codex and Gemini via MCO, and vice versa.

One Agent is a Tool. Five Agents are a Team.

No single AI model sees everything. Each model has its own training data, reasoning style, and blind spots. Using just one agent is like having a team of five engineers and only asking one for their opinion.

MCO turns this into a team workflow:

  1. Assign — You give MCO a task and a list of agents. Like a Tech Lead assigning the same code review to five team members.
  2. Execute in parallel — All agents work simultaneously. Wall-clock time ≈ the slowest agent, not the sum.
  3. Review and deduplicate — MCO collects each agent's findings, deduplicates identical issues across agents, and tracks which agents found what (detected_by).
  4. Synthesize consensus — Optionally, one agent summarizes the combined results: what everyone agrees on, where they diverge, and what to do next.
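The four steps above can be sketched in plain Python. This is an illustrative model of the pipeline, not MCO's implementation: the agents are stub functions standing in for real CLI dispatches, and while the consensus formula (consensus_score = agreement_ratio × max_confidence) and the detected_by provenance field come from MCO's own docs, the level thresholds are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub agents standing in for real CLI dispatches (claude, codex, gemini, ...).
# Each returns findings as (issue_id, confidence) pairs.
def claude(task): return [("sql-injection", 0.9), ("race-condition", 0.7)]
def codex(task):  return [("sql-injection", 0.8)]
def gemini(task): return [("memory-leak", 0.6)]

def orchestrate(task, agents):
    # Steps 1-2: assign the same task to every agent and run them in parallel,
    # so wall-clock time tracks the slowest agent rather than the sum.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {a.__name__: pool.submit(a, task) for a in agents}
        results = {name: f.result() for name, f in futures.items()}

    # Step 3: deduplicate identical findings across agents, keeping provenance.
    merged = {}
    for agent_name, findings in results.items():
        for issue, confidence in findings:
            entry = merged.setdefault(issue, {"detected_by": [], "max_confidence": 0.0})
            entry["detected_by"].append(agent_name)
            entry["max_confidence"] = max(entry["max_confidence"], confidence)

    # Step 4: score consensus. The formula is the documented one
    # (consensus_score = agreement_ratio x max_confidence); the level
    # thresholds below are assumed for illustration.
    for entry in merged.values():
        ratio = len(entry["detected_by"]) / len(agents)
        entry["consensus_score"] = ratio * entry["max_confidence"]
        if ratio >= 2 / 3:
            entry["level"] = "confirmed"
        elif len(entry["detected_by"]) > 1:
            entry["level"] = "needs-verification"
        else:
            entry["level"] = "unverified"
    return merged

report = orchestrate("review the auth module", [claude, codex, gemini])
```

Note how the parallel phase and the merge phase are independent: adding a sixth agent changes only the fan-out list, not the dedup or scoring logic.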

In practice, different agents catch different things:

  • One agent spots a race condition in your async code but overlooks an SQL injection in the ORM layer.
  • Another finds the injection immediately but misses the race condition entirely.
  • A third catches neither of those but flags a subtle memory leak in the resource cleanup path.

These aren't hypothetical — different models genuinely have different strengths. Some are better at security analysis, some at logic flow, some at performance patterns. By running 3–5 agents in parallel on the same codebase, you get a union of perspectives rather than the intersection. The result is a more thorough review than any single agent could produce, regardless of which one you pick.

This principle extends beyond code review:

  • Architecture analysis — different agents surface different design risks and trade-offs
  • Bug hunting — broader coverage across code paths and edge cases
  • Refactoring assessment — multiple perspectives on impact and safety of proposed changes

The question isn't "which AI agent is best" — it's "why limit yourself to one?"

Key Highlights

  • Parallel fan-out — dispatch to multiple agents simultaneously, wait-all semantics
  • Any IDE, any agent — works from Claude Code, Cursor, Trae, Copilot, Windsurf, or plain shell
  • Agent-to-agent orchestration — agents can dispatch tasks to other agents through MCO
  • Dual mode — mco review for structured code review findings, mco run for general task execution
  • Cross-agent deduplication — identical findings from multiple agents are merged automatically with detected_by provenance
  • Consensus engine — merged findings get consensus_score = agreement_ratio × max_confidence plus confirmed / needs-verification / unverified consensus levels
  • Cross-session memory — --memory flag persists findings and agent scores via evermemos-mcp, building institutional knowledge across runs
  • LLM synthesis — --synthesize runs an extra pass to produce a consensus/divergence summary across all agents
  • Live terminal streaming — --stream live renders rich real-time terminal progress; --stream jsonl remains available for machine consumers
  • Debate mode — --debate adds a second challenge round where agents critique the merged findings before final ranking
  • Divide mode — --divide files|dimensions splits review work by file slices or review dimensions while preserving the existing merge + consensus pipeline
  • CI/CD integration — --format sarif for GitHub Code Scanning, --format markdown-pr for PR comments
  • Environment health check — mco doctor probes binary presence, version, and auth status for all providers
  • Token usage tracking — --include-token-usage for best-effort per-agent and aggregate token consumption
  • Progress-driven timeouts — agents run freely until completion; cancel only when output goes idle
  • Stateful sessions — mco session for persistent multi-turn conversations with prompt queue and cancellation
  • ACP transport — --transport acp for structured JSON-RPC communication via the Agent Client Protocol
  • Custom ACP agents — --agent NAME COMMAND to register any ACP-compatible binary as a provider
  • Custom agent registry — .mco/agents.yaml, .mcorc.yaml, or ~/.mco/agents.yaml can register shim, ACP, or Ollama-backed agents; inspect them with mco agent list
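The registry files above can be pictured with a small .mco/agents.yaml. The field names in this sketch are assumptions for illustration, not MCO's documented schema; only the file paths and the three agent kinds (shim, ACP, Ollama-backed) come from the feature list:

```yaml
# Hypothetical .mco/agents.yaml — key names are illustrative, not the documented schema.
agents:
  - name: my-acp-agent          # any ACP-compatible binary
    type: acp
    command: /usr/local/bin/my-agent
  - name: local-llama           # Ollama-backed local model
    type: ollama
    model: llama3
  - name: review-shim           # shim wrapping an arbitrary CLI script
    type: shim
    command: ./scripts/run-review.sh
```

After editing the registry, mco agent list shows what is registered and mco doctor can confirm each binary is present and authenticated.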
View on GitHub
GitHub Stars: 254 · Forks: 23
Category: Development

Languages

Python

Security Score

100/100

Audited on Mar 28, 2026

No findings