Teamclaw
Teamclaw adopts the Auto-OASIS core: an automated, programmable multi-agent collaboration engine. Through simple YAML configuration under a main agent, users can define expert collaboration flows (sequential, parallel, or more complex programmable structures) that run in a backend-separated execution mode.
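As a purely illustrative sketch of what such a YAML flow could look like (every key name below is hypothetical and not TeamClaw's real schema; see `docs/create_workflow.md` for the actual grammar):

```yaml
# Hypothetical sketch only -- key names are illustrative, not the real schema.
main_agent: coordinator
experts:
  - name: researcher
  - name: critic
flow:
  - step: draft          # sequential step
    agent: researcher
  - parallel:            # fan out to several experts at once
      - agent: critic
      - agent: researcher
  - step: conclude       # main agent gathers the results
    agent: coordinator
```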
Install / Use
/learn @Avalon-467/Teamclaw
README
<a id="english"></a>
TeamClaw

An OpenAI-compatible local AI workspace with Teams, visual multi-agent orchestration, OASIS Town live mode, multimodal I/O, bots, scheduled tasks, and one-click public access.
Quick Start
Install via AI Code CLI
Open any AI coding assistant such as Codex, Cursor, Claude Code, CodeBuddy, or Trae, and say:
Clone https://github.com/Avalon-467/Teamclaw.git, read SKILL.md, and install TeamClaw.
That agent should then:
- Clone the repository
- Read `SKILL.md`
- Use `docs/index.md` to find the right docs
- Configure the environment and LLM settings
- Start the services
Manual Setup
<details>
<summary>Click to expand manual setup</summary>

Linux / macOS

```bash
bash selfskill/scripts/run.sh setup
bash selfskill/scripts/run.sh configure --init

# If you already know the model:
bash selfskill/scripts/run.sh configure --batch \
  LLM_API_KEY=sk-xxx \
  LLM_BASE_URL=https://api.example.com \
  LLM_MODEL=<model>

# If you need model discovery:
bash selfskill/scripts/run.sh configure LLM_API_KEY sk-xxx
bash selfskill/scripts/run.sh configure LLM_BASE_URL https://api.example.com
bash selfskill/scripts/run.sh auto-model
bash selfskill/scripts/run.sh configure LLM_MODEL <model>

bash selfskill/scripts/run.sh start
```

For managed terminals, CI, or agent runners that reap child processes after the command exits, use `bash selfskill/scripts/run.sh start-foreground` and keep that session open instead of `start`.
Windows PowerShell

```powershell
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 setup
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 configure --init

# If you already know the model:
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 configure --batch LLM_API_KEY=sk-xxx LLM_BASE_URL=https://api.example.com LLM_MODEL=<model>

# If you need model discovery:
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 configure LLM_API_KEY sk-xxx
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 configure LLM_BASE_URL https://api.example.com
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 auto-model
powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 configure LLM_MODEL <model>

powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 start
```

For managed terminals or automation that reaps child processes when the command returns, use `powershell -ExecutionPolicy Bypass -File .\selfskill\scripts\run.ps1 start-foreground` and keep that session attached.
Open the UI at `http://127.0.0.1:<PORT_FRONTEND>`.
On Windows, ports may be auto-remapped; trust `config/.env` or `run.ps1 status`.

</details>
Optional: Public Access
Use Cloudflare Tunnel when you explicitly want remote access:
```bash
python scripts/tunnel.py
```
Or start it via the TeamClaw run scripts / frontend settings panel.
TeamClaw combines a local /v1/chat/completions endpoint, a built-in multi-expert orchestration engine called OASIS, an optional OASIS Town live view in the chat tab, a full Web UI, and integrations such as OpenClaw, Telegram, QQ, audio I/O, scheduled tasks, and Cloudflare Tunnel. It supports any OpenAI-compatible provider — including Antigravity-Manager, a local reverse proxy that gives free access to 67+ models (Claude, Gemini, GPT) for users with a Google One Pro membership (e.g. via student verification), and MiniMax with its 1M-context M2.7 model.
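As a quick sanity check of the OpenAI-compatible endpoint, a minimal client can be sketched in Python using only the standard library. The port (`8000`) and model name below are placeholders, not values from the repo; read the real port from `config/.env` and add any auth header your setup requires:

```python
# Minimal sketch of calling TeamClaw's local /v1/chat/completions endpoint.
# Assumptions: port 8000 and model name "my-model" are placeholders.
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build a standard OpenAI-style chat-completions POST request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://127.0.0.1:8000", "my-model", "Hello, team!")
# Once the services are running, urllib.request.urlopen(req) sends the call.
print(req.full_url)
```

Because the payload shape is the standard OpenAI one, the same request works from any OpenAI-compatible SDK pointed at the local base URL.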
It is designed for both:
- people who want a powerful local AI control center
- AI coding agents that can clone the repo, read `SKILL.md`, and install / operate it autonomously
Why TeamClaw
- Team: unified multi-agent orchestration that combines internal agents, OpenClaw agents, and external API agents into a single Team, with one-click import/export of complete Team configurations
- OpenAI-compatible from day one: expose a local `/v1/chat/completions` endpoint that works with standard clients and custom tools
- Visual orchestration included: design workflows in OASIS, or save / run YAML workflows directly
- Live observability built in: switch active discussions into OASIS Town and watch / nudge them in real time from the chat tab
- Real operator features: settings UI, group chat, scheduled tasks, voice input, TTS, login tokens, and public tunnel support
- Agent-first operations: `SKILL.md` + `docs/index.md` + `docs/repo-index.md` let other coding agents install and manage TeamClaw with progressive disclosure
What You Can Do Today
| Capability | What It Gives You |
|---|---|
| OpenAI-compatible API | Local chat completions endpoint for apps, tools, and clients |
| Web UI | Chat, settings, OASIS panel, group chat, tunnel control |
| OASIS workflows | Sequential, parallel, branching, and DAG-style expert orchestration |
| OASIS Town | Turn a live OASIS topic into a pixel-town view in chat, with live residents, nudges, and ambient audio |
| Team system | Public/private agents, personas, workflows, and Team snapshots |
| OpenClaw + external agents | Bring in external runtimes and API-based agents |
| Multimodal I/O | Images, files, voice input, TTS, provider-aware audio defaults |
| Bots | Telegram and QQ integrations |
| Automation | Scheduled tasks and long-running workflow execution |
| Remote access | Cloudflare Tunnel plus login-token / password flows |
| Import / export | Share or restore Teams and related assets |
Typical Use Cases
- Local AI workspace: run a private AI assistant with a browser UI and OpenAI-compatible API
- Team debate and execution: let multiple experts challenge, refine, and conclude on the same task
- Live debate observability: watch an OASIS discussion as a pixel town in the chat tab and inject nudges while it is running
- AI integration hub: connect bots, external agent runtimes, and other OpenAI-compatible clients
- Operational cockpit: manage settings, ports, audio, workflows, public access, and users from one place
Product Highlights
OASIS Orchestration
OASIS is the engine that turns TeamClaw from a chatbot into a programmable multi-expert system.
- combine stateless experts, stateful sessions, OpenClaw agents, and external API agents
- run sequential, parallel, selector-based, or DAG-style workflows
- support Team-level personas and reusable saved workflows
- switch the current discussion into OASIS Town for a live pixel-town view inside the chat tab
- monitor topics, conclusions, and session state from CLI or UI
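To make "DAG-style" concrete, here is a generic sketch of dependency-ordered expert execution. This is a conceptual illustration only, not TeamClaw's actual engine (see `docs/oasis-reference.md` for the real runtime model); the expert functions and step names are invented for the example:

```python
# Generic DAG-execution sketch: run each expert step only after all of its
# dependencies have produced output (illustrative, not the OASIS engine).
from graphlib import TopologicalSorter

def run_dag(steps, experts):
    """steps: {name: [dependency names]}; experts: {name: fn(dep_outputs) -> str}."""
    order = TopologicalSorter(steps)
    outputs = {}
    for name in order.static_order():  # dependencies always come first
        dep_outputs = {d: outputs[d] for d in steps[name]}
        outputs[name] = experts[name](dep_outputs)
    return outputs

# Toy flow: draft -> (critique, fact_check in parallel branches) -> conclude
experts = {
    "draft": lambda deps: "draft text",
    "critique": lambda deps: f"critique of {deps['draft']}",
    "fact_check": lambda deps: f"checked {deps['draft']}",
    "conclude": lambda deps: " + ".join(sorted(deps)),
}
steps = {
    "draft": [],
    "critique": ["draft"],
    "fact_check": ["draft"],
    "conclude": ["critique", "fact_check"],
}
result = run_dag(steps, experts)
```

Sequential and parallel flows fall out as special cases: a chain is a DAG with one edge per step, and a fan-out is a set of steps sharing the same dependency.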
Teams and Personas
Each Team can combine:
- built-in lightweight internal agents
- OpenClaw agents
- external API agents
- public and private expert personas
- reusable workflows and Team snapshots
Bots, Audio, and Operations
TeamClaw is no longer just chat + orchestration. It also includes:
- Telegram and QQ bot integration
- voice input and text-to-speech
- provider-aware audio defaults for OpenAI / Gemini-style setups
- settings UI and restart flow
- login tokens and password-based remote access
- scheduled tasks and system-triggered execution
Acknowledgements
TeamClaw also benefited from several open-source projects:
- msitarzewski/agency-agents: inspiration for expanding our preset expert pool
- AGI-Villa/agent-town: reference for the interaction and presentation design behind OASIS Town
- tanweai/pua: inspiration for upgrading our original critical expert into a stronger PUA-style reviewer persona
Documentation Paths
Start with the level that matches your task:
- `SKILL.md`: entrypoint skill, install flow, operator guardrails
- `docs/index.md`: task-based documentation map
- `docs/repo-index.md`: codebase and data index
Deep dives:
- `docs/overview.md`: product overview
- `docs/oasis-reference.md`: OASIS runtime model and orchestration reference
- `docs/runtime-reference.md`: architecture, services, auth, and runtime reference
- `docs/build_team.md`: Team creation and member configuration
- `docs/create_workflow.md`: workflow YAML grammar and examples
- `docs/cli.md`: CLI command reference
- `docs/openclaw-commands.md`: OpenClaw integration commands
- `docs/ports.md`: ports, exposure, proxy routes
License
MIT License