Shannon — Production AI Agents That Actually Work
Ship reliable AI agents to production. Multi-strategy orchestration, swarm collaboration, token budget control, human approval workflows, and time-travel debugging — all built in. <a href="https://shannon.run" target="_blank">Live Demo →</a>
<div align="center">
View real-time agent execution and event streams
</div>

<div align="center">
Shannon open-source platform architecture — multi-agent orchestration with execution strategies, WASI sandboxing, and built-in observability
</div>

Why Shannon?
| The Problem | Shannon's Solution |
| ----------------------------------- | ------------------------------------------------------------ |
| Agents fail silently? | Temporal workflows with time-travel debugging — replay any execution step-by-step |
| Costs spiral out of control? | Hard token budgets per task/agent with automatic model fallback |
| No visibility into what happened? | Real-time dashboard, Prometheus metrics, OpenTelemetry tracing |
| Security concerns? | WASI sandbox for code execution, OPA policies, multi-tenant isolation |
| Vendor lock-in? | Works with OpenAI, Anthropic, Google, DeepSeek, local models |
Quick Start
Prerequisites
- Docker and Docker Compose
- An API key for at least one LLM provider (OpenAI, Anthropic, etc.)
Installation
Quick Install:
```bash
curl -fsSL https://raw.githubusercontent.com/Kocoro-lab/Shannon/v0.3.1/scripts/install.sh | bash
```
This downloads config, prompts for API keys, pulls Docker images, and starts services.
Required API Keys (choose one):
- OpenAI: `OPENAI_API_KEY=sk-...`
- Anthropic: `ANTHROPIC_API_KEY=sk-ant-...`
- Or any OpenAI-compatible endpoint

Optional but recommended:
- Web Search: `SERPAPI_API_KEY=...` (get a key at serpapi.com)
- Web Fetch: `FIRECRAWL_API_KEY=...` (get a key at firecrawl.dev)
Setting API keys: The install script prompts you to edit .env during setup. To update keys later:
```bash
cd ~/shannon   # or your install directory
nano .env      # edit API keys
docker compose -f docker-compose.release.yml down
docker compose -f docker-compose.release.yml up -d
```
Building from source? See Development below.
Platform-specific guides: Ubuntu · Rocky Linux · Windows · Windows (中文)
Your First Agent
Shannon provides multiple ways to interact with AI agents. Choose the option that works best for you:
Option 1: REST API
Use Shannon's HTTP REST API directly. For complete API documentation, see docs.shannon.run.
```bash
# Submit a task
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is the capital of France?",
    "session_id": "demo-session"
  }'
# Response: {"task_id":"task-dev-123","status":"running"}

# Stream events in real-time
curl -N "http://localhost:8080/api/v1/stream/sse?workflow_id=task-dev-123"

# Get final result
curl "http://localhost:8080/api/v1/tasks/task-dev-123"
```
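The same submit-then-fetch flow can be scripted with only the Python standard library. This is a sketch against the endpoints shown above; the `task_id` field is taken from the example response, and any error handling is omitted for brevity:

```python
import json
import urllib.request

BASE = "http://localhost:8080"

def task_payload(query, session_id):
    # JSON body for POST /api/v1/tasks, matching the curl example above
    return {"query": query, "session_id": session_id}

def submit_task(query, session_id, base=BASE):
    req = urllib.request.Request(
        f"{base}/api/v1/tasks",
        data=json.dumps(task_payload(query, session_id)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["task_id"]

def get_task(task_id, base=BASE):
    # GET /api/v1/tasks/{task_id} returns the task record, including status
    with urllib.request.urlopen(f"{base}/api/v1/tasks/{task_id}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    tid = submit_task("What is the capital of France?", "demo-session")
    print(get_task(tid))
```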
Perfect for:
- Integrating Shannon into existing applications
- Automation scripts and workflows
- Language-agnostic integration
Option 2: Python SDK
Install the official Shannon Python SDK:
```bash
pip install shannon-sdk
```
```python
from shannon import ShannonClient

# Create client
with ShannonClient(base_url="http://localhost:8080") as client:
    # Submit task
    handle = client.submit_task(
        "What is the capital of France?",
        session_id="demo-session",
    )

    # Wait for completion
    result = client.wait(handle.task_id)
    print(result.result)
```
Perfect for:
- Python-based applications and notebooks
- Data science workflows
- Batch processing and automation
See Python SDK Documentation for the full API reference.
Option 3: Native Desktop App
Download pre-built desktop applications from GitHub Releases:
- macOS (Universal) — Intel & Apple Silicon
- Windows (x64) — MSI or EXE installer
- Linux (x64) — AppImage or DEB package
Or build from source:
```bash
cd desktop
npm install
npm run tauri:build  # Builds for your platform
```
Native app benefits:
- System tray integration and native notifications
- Offline task history (Dexie.js local database)
- Better performance and lower memory usage
- Auto-updates from GitHub releases
See Desktop App Guide for more details.
Option 4: Web UI (Requires Source Checkout)
Run the desktop app as a local web server for development:
```bash
# In a new terminal (backend should already be running)
cd desktop
npm install
npm run dev

# Open http://localhost:3000 in your browser
```
Perfect for:
- Quick testing and exploration
- Development and debugging
- Real-time event streaming visualization
Configuring Tool API Keys
Add these to your .env file based on which tools you need:
```bash
# Web Search (choose one provider)
WEB_SEARCH_PROVIDER=serpapi            # serpapi | searchapi | google | bing | exa
SERPAPI_API_KEY=your-serpapi-key       # serpapi.com
# OR
SEARCHAPI_API_KEY=your-searchapi-key   # searchapi.io
# OR
GOOGLE_SEARCH_API_KEY=your-google-key  # Google Custom Search
GOOGLE_SEARCH_ENGINE_ID=your-engine-id

# Web Fetch/Crawl (for deep research)
WEB_FETCH_PROVIDER=firecrawl           # firecrawl | exa | python
FIRECRAWL_API_KEY=your-firecrawl-key   # firecrawl.dev (recommended for production)
```
Tip: For quick setup, just add `SERPAPI_API_KEY`. Get a key at serpapi.com.
Ports & Endpoints
| Service | Port | Endpoint | Purpose |
|---------|------|----------|---------|
| Gateway | 8080 | http://localhost:8080 | REST API, OpenAI-compatible /v1 |
| Admin/Events | 8081 | http://localhost:8081 | SSE/WebSocket streaming, health |
| Orchestrator | 50052 | localhost:50052 | gRPC (internal) |
| Temporal UI | 8088 | http://localhost:8088 | Workflow debugging |
| Grafana | 3030 | http://localhost:3030 | Metrics dashboard |
Additional API Endpoints
Daemon & Real-time Messaging:
- `GET /api/v1/daemon/status` — Daemon connection status
- `WebSocket /v1/ws/messages` — Real-time message delivery to connected CLI daemons
Channels (Messaging Integrations):
- `POST/GET/PUT/DELETE /api/v1/channels` — CRUD for messaging channel integrations (Slack, LINE)
- `POST /api/v1/channels/{channel_id}/webhook` — Inbound webhook endpoint
Workspace Files:
- `GET /api/v1/sessions/{sessionId}/files` — List session workspace files
- `GET /api/v1/sessions/{sessionId}/files/{path}` — Download a workspace file
Architecture
See the architecture diagram above for the full platform overview including execution strategies, sandbox isolation, and tool ecosystem.
Components:
- Orchestrator (Go) — Task routing, budget enforcement, session management, OPA policies
- Agent Core (Rust) — WASI sandbox, policy enforcement, session workspaces, file operations
- LLM Service (Python) — Provider abstraction (15+ LLMs), MCP tools, skills system
- Data Layer — PostgreSQL (state), Redis (sessions), Qdrant (vector memory)
Core Capabilities
OpenAI-Compatible API
```bash
# Drop-in replacement for OpenAI API
export OPENAI_API_BASE=http://localhost:8080/v1

# Your existing OpenAI code works unchanged
```
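To sketch what "drop-in" means here: any OpenAI-style client can point at the gateway's `/v1` base. The snippet below builds a standard chat-completions request with only the standard library; the model name is a placeholder (use whatever your configured provider accepts), and the response shape assumed in `__main__` is the standard OpenAI `choices` array:

```python
import json
import urllib.request

def chat_request(base_url, prompt, model="gpt-4o-mini"):
    # Standard OpenAI-style chat completions body
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = chat_request("http://localhost:8080/v1", "What is the capital of France?")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```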
Real-time Event Streaming
```bash
# Monitor agent execution in real-time (SSE)
curl -N "http://localhost:8080/api/v1/stream/sse?workflow_id=task-dev-123"

# Events include:
# - WORKFLOW_STARTED, WORKFLOW_COMPLETED
# - AGENT_STARTED, AGENT_COMPLETED
# - TOOL_INVOKED, TOOL_OBSERVATION
# - LLM_PARTIAL, LLM_OUTPUT
```
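For programmatic consumers, the SSE wire format is plain text: `event:` and `data:` lines separated by blank lines. Below is a minimal parser sketch; the JSON payloads in the sample are illustrative, not Shannon's exact event schema:

```python
def parse_sse(raw):
    """Split a raw text/event-stream body into (event, data) pairs."""
    events, event, data = [], None, []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and (event or data):
            # A blank line terminates one SSE event
            events.append((event, "\n".join(data)))
            event, data = None, []
    return events

sample = (
    "event: AGENT_STARTED\n"
    'data: {"agent": "agent-1"}\n'
    "\n"
    "event: LLM_OUTPUT\n"
    'data: {"text": "Paris"}\n'
    "\n"
)
print(parse_sse(sample))
# [('AGENT_STARTED', '{"agent": "agent-1"}'), ('LLM_OUTPUT', '{"text": "Paris"}')]
```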
Skills System
```bash
# List available skills
curl http://localhost:8080/api/v1/skills

# Execute task with a skill (skill becomes system prompt)
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Review the auth module for security issues",
    "skill": "code-review",
    "session_id": "review-123"
  }'
```
Create custom skills in `config/skills/user/`. See Skills System.
WASI Sandbox & Session Workspaces
```bash
# Agents execute code in isolated WASI sandboxes — no network, read-only FS
# Each session gets its own workspace at /tmp/shannon-sessions/{session_id}/
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Run this Python script and save the output",
    "session_id": "my-workspace"
  }'
```
WASI sandbox provides secure code
