Cebus

License: MIT · Node.js · TypeScript

<!-- [![CI](https://github.com/cebus-ai/cebus/actions/workflows/ci.yml/badge.svg)](https://github.com/cebus-ai/cebus/actions/workflows/ci.yml) -->

Welcome! Cebus is a multi-model group chat platform that lets you communicate with multiple AI models simultaneously in a single conversation. It is available as a terminal CLI and a web application.

Why "Cebus"? Cebus is a genus of capuchin monkeys — small, highly intelligent primates known for their remarkable problem-solving abilities and collaborative social behavior. Capuchins work together in groups, use tools, and learn from each other. Cebus (the project) brings that same collaborative intelligence to AI: multiple models working together, each contributing their strengths, to solve problems no single model could handle as well alone.

Overview

Cebus creates a collaborative environment where you can chat with multiple AI models (e.g. GPT, Claude, Gemini, Copilot, Ollama) at the same time. Ask a question once and get perspectives from different models, or direct messages to specific participants using @mentions.

Built on LangGraph for orchestration with a hub-and-spoke architecture.

Recommended: use with GitHub Copilot. Cebus is designed to work on top of the Copilot CLI SDK, which gives every model participant built-in agent capabilities — file editing, code search, terminal access — turning the group chat into a collaborative development environment. With Copilot, models don't just talk; they can act on your codebase as a team. Other providers (OpenAI, Anthropic, Gemini, Ollama) work great for conversation and can gain tool capabilities through MCP server configuration, but Copilot provides this out of the box. That said, Copilot is not required — you can use Cebus with just Ollama (completely free and local) or any provider with an API key.

Features

  • Multi-Model Conversations — Chat with OpenAI, Anthropic, Google Gemini, GitHub Copilot, Ollama, and OpenRouter models in the same session
  • Three Interfaces — Terminal CLI, Web UI (Vite/React/Tailwind), and VS Code extension
  • Chat Modes — Choose how models interact:
    • All at Once — All models respond simultaneously
    • One by One — Models take turns responding sequentially
    • Mention Only — Models respond only when @mentioned
    • Role-Based — Assign specialized roles to each model (developer, QA, designer, etc.)
    • Raw — Direct model access without orchestration overhead
  • Web UI — Channel-based messaging with threads, inbox, and direct messages. Rich content blocks (polls, charts, forms, quizzes, Mermaid diagrams, code blocks, diff views). i18n support (en/he) with RTL layout
  • VS Code Extension — Embedded webview with chat, Copilot CLI auto-discovery, and native vscode.lm API integration
  • File Attachments — Paste images and files directly into chat with drag-and-drop support
  • LiveCanvas — Interactive diagram navigation with transformation controls
  • Role-Based Teams — In role-based mode, models are assigned roles from built-in templates or custom .cebus/agents/*.agent.md files. Drag-and-drop reordering for agents and channels
  • Cost Tiers — Models are categorized as premium, middle, budget, or local to help you pick the right balance of cost and capability
  • Flexible Addressing — Broadcast to all models or @mention specific ones
  • Real-Time Streaming — See responses as they're generated with cancellation support
  • Interactive Onboarding — Choose cost tier, models, chat mode, and roles at startup
  • Session Management — Add, remove, or rename participants mid-conversation
  • Project Context — Models can be aware of your current project (CLAUDE.md, git branch, directory structure) with configurable folder access and working directory
  • Session Persistence — Save and resume previous conversations
  • LangGraph Orchestration — Hub-and-spoke architecture with sequential, tag-only, free-chat, raw, dynamic orchestrator, and team leader modes. Context overflow handling with automatic model switching
  • MCP Tool Support — Connect MCP servers for tool-augmented conversations
  • AI Orchestrator — Optional middleware that analyzes messages, orchestrates multi-round agent discussions, and presents plans for complex tasks. Opt-in during onboarding with a user-selectable model (free with Ollama)
  • Team Leader Mode — An advanced orchestration mode where a visible AI facilitator ("Team Leader") manages the group discussion. The TL appears in the conversation timeline, explains its reasoning, asks the user to choose a strategy (round-table discussion vs. divide-and-conquer), orchestrates multi-round collaboration through structured phases (clarify → plan → deliberate → execute → review → synthesize), and drives toward a unified output. Supports user interrupts (stop, skip, restart) and budget-aware guardrails
  • Copilot SDK Integration — Native integration with GitHub Copilot via @github/copilot-sdk. SDK-managed sessions with built-in tools (file editing, shell, search), automatic context compaction, tool approval workflow, and session resumption — no MCP configuration needed
  • OpenRouter Gateway — Multi-provider access through OpenRouter with BYOK support and credit tracking
  • Cost Tracking — Per-model token and cost breakdown in the exit summary. Three-layer pricing resolution (user override, daily cache, built-in defaults). Context window utilization percentage shown during compaction
  • Externalized Prompts — System prompts, mode instructions, and tier guidance live in .md files editable at runtime
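The custom role files mentioned under Role-Based Teams are plain Markdown, but this README does not show their schema. The following `.cebus/agents/reviewer.agent.md` is therefore only a hypothetical sketch of what such a file might look like, with the filename and structure assumed for illustration:

```markdown
<!-- .cebus/agents/reviewer.agent.md — hypothetical example; the actual format may differ -->
# Reviewer

You are the team's code reviewer. Focus on correctness, security, and
readability. Flag risky changes and suggest concrete improvements rather
than rewriting whole files. Defer implementation work to the developer role.
```

In role-based mode, a file like this would define one named participant's persona alongside the built-in templates.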

[Screenshot: Role-Based mode]

Prerequisites

Node.js >= 24.0.0 is required to run Cebus. If you already have Node.js installed, verify your version with node --version and skip ahead to Installation.

macOS:

# Using Homebrew
brew install node

# Verify installation
node --version  # Should show v24.x or higher

Windows:

# Using winget (built into Windows 11)
winget install OpenJS.NodeJS.LTS

# Or download the installer from https://nodejs.org/

# Verify installation (restart terminal first)
node --version  # Should show v24.x or higher
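The version requirement can also be checked from a script. This small helper is an illustration only (it is not part of Cebus or its install script); it extracts the major version from a `node --version` string and warns if it is below 24:

```shell
#!/bin/sh
# Extract the major number from a version string like "v24.1.0".
node_major() {
  printf '%s\n' "$1" | sed 's/^v\{0,1\}\([0-9]*\).*/\1/'
}

# Fall back to a zero version if node is not on PATH.
ver=$(node --version 2>/dev/null || echo "v0.0.0")
if [ "$(node_major "$ver")" -lt 24 ]; then
  echo "Cebus needs Node.js >= 24 (found $ver)" >&2
fi
```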

Optional: Bun (For Development Only)

⚠️ Important: Bun does not support better-sqlite3, which is required for session persistence. You can use Bun for building and testing, but Node.js is required to run the CLI.

# macOS / Linux
curl -fsSL https://bun.sh/install | bash

# Windows (PowerShell)
irm bun.sh/install.ps1 | iex

Installation

Quick Install (macOS / Linux)

# 1. Install Node.js if you don't have it (>= 24.0.0 required)
brew install node  # skip if already installed

# 2. Clone and run the install script
git clone https://github.com/cebus-ai/cebus.git
cd cebus
./install.sh

# 3. Reload your shell
source ~/.zshrc  # or source ~/.bashrc for bash users

# 4. Start Cebus
cebus

What the install script does:

  1. Checks if Node.js is installed (exits if not found)
  2. Runs npm install to install dependencies
  3. Runs npm rebuild better-sqlite3 to compile native modules
  4. Detects your shell (.zshrc, .bashrc, or .profile)
  5. Adds the project directory to your PATH
  6. Creates a cebus executable wrapper script
  7. Makes the script executable with chmod +x

After installation, the cebus command will be available globally.
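Steps 4–5 above amount to appending a single line to your shell profile. It would look something like this sketch, where the clone location is an assumption (the script uses wherever you actually cloned the repository):

```shell
# Line install.sh appends to ~/.zshrc / ~/.bashrc (illustrative path):
export PATH="$HOME/cebus:$PATH"
```

With the project directory on PATH, the `cebus` wrapper created in step 6 resolves from any working directory.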

Quick Install (Windows)

# 1. Install Node.js if you don't have it (>= 24.0.0 required)
winget install OpenJS.NodeJS.LTS  # skip if already installed

# 2. Clone the repository
git clone https://github.com/cebus-ai/cebus.git
cd cebus

# 3. Run the install script (either option works)
.\install.cmd
# or: powershell -ExecutionPolicy Bypass -File install.ps1

# 4. Restart your terminal, then start Cebus
cebus

What the install script does:

  1. Checks if Node.js is installed (exits if not found)
  2. Runs npm install --omit=optional (core dependencies only, no provider SDKs)
  3. Builds the project with npm run build
  4. Runs npm link to register the cebus command globally

After restarting your terminal, run cebus config to see available providers, then install only the ones you need:

npm install openai              # For OpenAI + Ollama
npm install @anthropic-ai/sdk   # For Anthropic
npm install @google/genai       # For Google Gemini
npm install @github/copilot-sdk # For GitHub Copilot

From npm (when published)

npm install -g cebus

Minimal Installation (Install Only What You Need)

By default, Cebus installs all provider SDKs (~350MB). If you only use specific providers, you can save disk space:

# Option 1: Skip all optional dependencies (saves ~200MB)
npm install --omit=optional

# Option 2: Install only specific providers you need
npm install --omit=optional
npm install openai                 # For OpenAI
npm install @anthropic-ai/sdk      # For Anthropic
npm install @google/genai          # For Google Gemini
npm install @github/copilot-sdk    # For GitHub Copilot
# Ollama uses the openai package for compatibility

Provider SDK sizes:

  • @github/copilot-sdk - ~264MB (includes PowerShell scripts, native binaries)
  • openai - ~12MB (also required for Ollama)
  • @google/genai - ~8MB
  • @anthropic-ai/sdk - ~6MB

Note: If you try to use a provider without its SDK installed, you'll get a helpful error message with installation instructions.

Manual Installation (Without Install Script)

If you prefer not to use the install script, or want full control over each step, you can perform the setup manually.
