CCCC
Local-first Multi-agent Collaboration Kernel
A lightweight multi-agent framework with infrastructure-grade reliability.
Chat-native, prompt-driven, and bi-directional by design.
Run multiple coding agents as a durable, coordinated system — not a pile of disconnected terminal sessions.
Three commands to go. Zero infrastructure, production-grade power.
Why CCCC
- Durable coordination: working state lives in an append-only ledger, not in terminal scrollback.
- Visible delivery semantics: messages have routing, read, ack, and reply-required tracking instead of best-effort prompting.
- One control plane: Web UI, CLI, MCP, and IM bridges all operate on the same daemon-owned state.
- Multi-runtime by default: Claude Code, Codex CLI, Gemini CLI, and the rest of the first-class runtimes can collaborate in one group.
- Local-first operations: one `pip install`, runtime state in `CCCC_HOME`, and remote supervision only when you choose to expose it.
The Problem
Using multiple coding agents today usually means:
- Lost context — coordination lives in terminal scrollback and disappears on restart
- No delivery guarantees — did the agent actually read your message?
- Fragmented ops — start/stop/recover/escalate across separate tools
- No remote access — checking on a long-running group from your phone is not an option
These aren't minor inconveniences. They're the reason most multi-agent setups stay fragile demos instead of reliable workflows.
What CCCC Does
CCCC is a single pip install with zero external dependencies — no database, no message broker, no Docker required. Yet it gives you the pieces fragile multi-agent setups usually lack:
| Capability | How |
|---|---|
| Single source of truth | Append-only ledger (ledger.jsonl) records every message and event — replayable, auditable, never lost |
| Reliable messaging | Read cursors, attention ACK, and reply-required obligations — you know exactly who saw what |
| Unified control plane | Web UI, CLI, MCP tools, and IM bridges all talk to one daemon — no state fragmentation |
| Multi-runtime orchestration | Claude Code, Codex CLI, Gemini CLI, and 5 more first-class runtimes, plus custom for everything else |
| Role-based coordination | Foreman + peer model with permission boundaries and recipient routing (@all, @peers, @foreman) |
| Local-first runtime state | Runtime data stays in CCCC_HOME, not your repo, while Web Access and IM bridges cover remote operations |
How CCCC looks
<div align="center"> <video src="https://github.com/user-attachments/assets/460b6719-428b-4c1c-8879-0ebf8b8cee4f" controls="controls" muted="muted" autoplay="autoplay" loop="loop" style="max-width: 100%;"> </video> </div>

Quick Start
Install
# Stable channel (PyPI)
pip install -U cccc-pair
# RC channel (TestPyPI)
pip install -U --pre \
--index-url https://test.pypi.org/simple/ \
--extra-index-url https://pypi.org/simple/ \
cccc-pair
Requirements: Python 3.9+, macOS / Linux / Windows
Launch
cccc
Open http://127.0.0.1:8848 — by default, CCCC brings up the daemon and the local Web UI together.
Create a multi-agent group
cd /path/to/your/repo
cccc attach . # bind this directory as a scope
cccc setup --runtime claude # configure MCP for your runtime
cccc actor add foreman --runtime claude # first actor becomes foreman
cccc actor add reviewer --runtime codex # add a peer
cccc group start # start all actors
cccc send "Split the task and begin." --to @all
You now have two agents collaborating in a persistent group with full message history, delivery tracking, and a web dashboard. The daemon owns delivery and coordination, and runtime state stays in CCCC_HOME rather than inside your repo.
Programmatic Access (SDK)
Use the official SDK when you need to integrate CCCC into external applications or services:
pip install -U cccc-sdk
npm install cccc-sdk
The SDK does not include a daemon. It connects to a running `cccc` core instance.
Architecture
graph TB
subgraph Agents["Agent Runtimes"]
direction LR
A1["Claude Code"]
A2["Codex CLI"]
A3["Gemini CLI"]
A4["+ 5 more + custom"]
end
subgraph Daemon["CCCC Daemon · single writer"]
direction LR
Ledger[("Ledger<br/>append-only JSONL")]
ActorMgr["Actor<br/>Manager"]
Auto["Automation<br/>Rules · Nudge · Cron"]
Ledger ~~~ ActorMgr ~~~ Auto
end
subgraph Ports["Control Plane"]
direction LR
Web["Web UI<br/>:8848"]
CLI["CLI"]
MCP["MCP<br/>(stdio)"]
end
subgraph IM["IM Bridges"]
direction LR
TG["Telegram"]
SL["Slack"]
DC["Discord"]
FS["Feishu"]
DT["DingTalk"]
end
Agents <-->|MCP tools| Daemon
Daemon <--> Ports
Web <--> IM
Key design decisions:
- Daemon is the single writer — all state changes go through one process, eliminating race conditions
- Ledger is append-only — events are never mutated, making history reliable and debuggable
- Ports are thin — Web, CLI, MCP, and IM bridges are stateless frontends; the daemon owns all truth
- Runtime home is `CCCC_HOME` (default `~/.cccc/`) — runtime state stays out of your repo
Supported Runtimes
CCCC orchestrates agents across 8 first-class runtimes, with custom available for everything else. Each actor in a group can use a different runtime.
| Runtime | Auto MCP Setup | Command |
|---------|:--------------:|---------|
| Claude Code | ✅ | claude |
| Codex CLI | ✅ | codex |
| Gemini CLI | ✅ | gemini |
| Droid | ✅ | droid |
| Amp | ✅ | amp |
| Auggie | ✅ | auggie |
| Kimi CLI | ✅ | kimi |
| Neovate | ✅ | neovate |
| Custom | — | Any command |
cccc setup --runtime claude # auto-configures MCP for this runtime
cccc runtime list --all # show all available runtimes
cccc doctor # verify environment and runtime availability
Messaging & Coordination
CCCC implements IM-grade messaging semantics, not just "paste text into a terminal":
- Recipient routing — `@all`, `@peers`, `@foreman`, or specific actor IDs
- Read cursors — each agent explicitly marks messages as read via MCP
- Reply & quote — structured `reply_to` with quoted context
- Attention ACK — priority messages require explicit acknowledgment
- Reply-required obligations — tracked until the recipient responds
- Auto-wake — disabled agents are automatically started when they receive a message
Messages are delivered to actor runtimes through the daemon-managed delivery pipeline, and the daemon tracks delivery state for every message.
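Read cursors and reply-required obligations can be modeled as two small pieces of state per actor: the highest sequence number read, and a set of still-open obligations. A toy model, assuming hypothetical field names (CCCC's daemon tracks equivalent state internally):

```python
class DeliveryTracker:
    """Toy model of read cursors and reply-required obligations."""

    def __init__(self):
        self.read_cursor = {}    # actor_id -> highest message seq read
        self.obligations = set() # (actor_id, seq) pairs awaiting a reply

    def deliver(self, seq, recipients, reply_required=False):
        # Delivery creates an obligation only when a reply is required.
        for actor in recipients:
            if reply_required:
                self.obligations.add((actor, seq))

    def mark_read(self, actor, seq):
        # Cursors only move forward; re-reading never rewinds them.
        self.read_cursor[actor] = max(self.read_cursor.get(actor, 0), seq)

    def reply(self, actor, seq):
        # A reply discharges the obligation for that message.
        self.obligations.discard((actor, seq))

    def unread(self, actor, latest_seq):
        return latest_seq - self.read_cursor.get(actor, 0)
```

The point of the model: "did the agent read it?" and "did the agent reply?" are separate questions with separate state, which is why a read receipt alone does not clear a reply-required message.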
Automation & Policies
A built-in rules engine handles operational concerns so you don't have to babysit:
| Policy | What it does |
|--------|-------------|
| Nudge | Reminds agents about unread messages after a configurable timeout |
| Reply-required follow-up | Escalates when required replies are overdue |
| Actor idle detection | Notifies the foreman when an agent goes silent |
| Keepalive | Periodic check-in reminders for the foreman |
| Silence detection | Alerts when an entire group goes quiet |
Beyond built-in policies, you can create custom automation rules:
- Interval triggers — "every N minutes, send a standup reminder"
- Cron schedules — "every weekday at 9am, post a status check"
- One-time triggers — "at 5pm today, pause the group"
- Operational actions — set group state or control actor lifecycles (admin-only, one-time only)
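The three trigger kinds above reduce to one question: given a rule and the current time, when does it fire next? A minimal sketch, assuming a hypothetical rule shape (`kind`, `minutes`, `at` keys are illustrative, not CCCC's config format):

```python
from datetime import datetime, timedelta

def next_fire(rule, now):
    """Compute the next fire time for a toy automation rule.

    'interval' fires every N minutes; 'once' fires at a single
    absolute time (or never, if that time has passed)."""
    if rule["kind"] == "interval":
        return now + timedelta(minutes=rule["minutes"])
    if rule["kind"] == "once":
        return rule["at"] if rule["at"] > now else None
    raise ValueError(f"unknown trigger kind: {rule['kind']}")
```

Cron schedules follow the same shape, just with a richer next-time calculation than a fixed offset.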
Web UI
The built-in Web UI at http://127.0.0.1:8848 provides:
- Chat view with `@mention` autocomplete and reply threading
- Per-actor embedded terminals (xterm.js) — see exactly what each agent is doing
- Group & actor management — create, configure, start, stop, restart
- Automation rule editor — configure triggers, schedules, and actions visually
- Context panel — shared vision, sketch, milestones, and tasks
- IM bridge configuration — connect to Telegram/Slack/Discord/Feishu/DingTalk
- Settings — messaging policies, delivery tuning, terminal transcript controls
- Light / Dark / System themes
Remote access
For accessing the Web UI from outside localhost:
- LAN / private network — bind Web on all local interfaces: `CCCC_WEB_HOST=0.0.0.0 cccc`
- Cloudflare Tunnel (recommended) — `cloudflared tunnel --url http://127.0.0.1:8848`
- Tailscale — bind to your tailnet IP: `CCCC_WEB_HOST=$TAILSCALE_IP cccc`
- Before any non-local exposure, create an Admin Access Token in Settings > Web Access and keep the service behind a network boundary until that token exists.
- In Settings > Web Access, `127.0.0.1` means local-only, while `0.0.0.0` means localhost plus your LAN IP on a normal local host. If CCCC is running inside WSL2's default NAT networking, `0.0.0.0` only exposes Web inside WSL; for LAN devices, use WSL mirrored networking or a Windows portproxy/firewall rule. `Save` stores the target binding. If Web was started by `cccc` or `cccc web`, use `Apply now`.