# swarmd
swarmd sits somewhere in the awkward intersection between "OpenClaw for Enterprise" and "Kubernetes for Agents".
> **Warning:** swarmd is alpha software. It has not yet been extensively tested at scale or hardened for production environments.
- Overview
- Quick Start
- Deployment
- Examples
- Agent YAML
- Custom Tool Catalog
- Adding Custom Tools
- Acknowledgements
- Motivation
## Overview
swarmd is a multi-tenant runtime for running background Agents in a safe and secure manner. Agents are defined in YAML and run as goroutines in a multi-tenant server with a virtual shell and custom tools. swarmd is not a generic sandbox for running existing agent harnesses. It is a stand-alone agent harness designed from the ground up with sandboxing in mind.
```yaml
version: 1
agent_id: hello-heartbeat
name: Hello Heartbeat
model:
  name: gpt-5
prompt: |
  Use `server_log` to write exactly one info log entry, then finish with a short confirmation.
tools:
  - server_log
schedules:
  - id: every-minute
    cron: "* * * * *"
    timezone: UTC
```
`pkg/agent` can also be used directly from Go to embed sandboxed agents inside your own application without the full swarmd server. See `examples/embedding` for small end-to-end embedding examples.
swarmd does not rely on any operating system sandboxing primitives. It will run anywhere you can run a Go application, and it works exactly the same in all environments.
swarmd Agents have zero direct access to the host operating system. Filesystem operations go through a filesystem interface that limits access to a specific subdirectory (or a fake in-memory filesystem), and network access goes through a network interface with a custom dialer plus a managed HTTP layer for host-owned header injection.
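swarmd's actual filesystem interface is internal to the project, but the core idea of confining access to one subdirectory can be sketched in a few lines. This is an illustration of the technique, not swarmd's implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolve maps an agent-supplied path onto the sandbox root and rejects
// any path that would escape it (e.g. via "../"). A real implementation
// would also need to guard against symlink escapes; newer Go releases
// offer os.Root for kernel-assisted confinement.
func resolve(root, requested string) (string, error) {
	joined := filepath.Join(root, requested) // Join also cleans the result
	if joined != root && !strings.HasPrefix(joined, root+string(filepath.Separator)) {
		return "", fmt.Errorf("path %q escapes sandbox root", requested)
	}
	return joined, nil
}

func main() {
	root := "/srv/agents/hello-heartbeat" // hypothetical sandbox root
	p, _ := resolve(root, "notes/todo.md")
	fmt.Println(p)

	_, err := resolve(root, "../../etc/passwd")
	fmt.Println(err != nil) // true: the traversal attempt is rejected
}
```

Routing every file operation through a resolver like this is what lets the same agent code run against a real subdirectory or an in-memory filesystem interchangeably.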
All Agents have access to the same built-in tools:
- `apply_patch`: apply a structured patch to local files
- `describe_image`: describe an image through the active provider's native vision API using a sandbox file path, inline base64, or a public image URL
- `grep_files`: search local files with a regular expression and return matching paths
- `http_request`: make direct HTTP requests for API-style interactions
- `list_dir`: list entries in a local directory with bounded output
- `read_file`: read a local file with numbered, bounded output
- `read_web_page`: fetch a web page and convert it to markdown or text
- `run_shell`: run one sandboxed shell command when no structured tool fits
- `web_search`: search the public web through the runtime-owned search backend
Additional custom tools can also be compiled into the server, but Agents only receive them on an allow-list basis: the tool must be present in the server binary and explicitly listed under the Agent's `tools:` field. See Custom Tool Catalog for the currently available custom tools, or Adding Custom Tools to write your own.
Agent activity and state is tracked in a local SQLite database that can be investigated with a local TUI.
The runtime also includes a persistent memory system inside the sandboxed filesystem. Agents can keep durable notes under `.memory/`, use `.memory/ROOT.md` as a small index, and load deeper topic files only when they are relevant to the current task.
See Agent YAML for the short version and `docs/agent-yaml-guide.md` for the full reference.
## Quick Start
swarmd expects a config root with a nested directory layout of YAML agent specs under `namespaces/<namespace>/agents/*.yaml`. There are two easy ways to get started: install the swarmd binary and scaffold that directory structure locally by running `swarmd init`, or clone this repository and run one of the bundled examples.
The stock swarmd binary supports both OpenAI and Anthropic worker drivers. The bundled example configs still default to OpenAI, so the commands below use `OPENAI_API_KEY`. Anthropic-backed configs should set `model.provider: anthropic` and provide `ANTHROPIC_API_KEY`.
### Install The Binary
If you want a local config root to start from, install swarmd directly and let `swarmd init` create the default directory structure plus a sample heartbeat agent:
```sh
go install github.com/richardartoul/swarmd/pkg/server/cmd/swarmd@latest
export OPENAI_API_KEY=your-openai-api-key
swarmd init
swarmd config validate
swarmd server
```
That bootstraps `./server-config/namespaces/default/agents/server-log-heartbeat.yaml` and stores server state under `./data/`. In another terminal, open the TUI against that local SQLite database:
```sh
swarmd tui
```
### Clone The Repository And Run An Example
If you prefer to start from a checked-in example, clone the repository and point swarmd at one of the example config roots:
```sh
git clone https://github.com/richardartoul/swarmd.git
cd swarmd
export OPENAI_API_KEY=your-openai-api-key
go run ./pkg/server/cmd/swarmd server \
  --config-root ./examples/agents/hello-heartbeat/server-config \
  --data-dir ./.tmp/swarmd/hello-heartbeat
```
Open the TUI against the example database:
```sh
go run ./pkg/server/cmd/swarmd tui \
  --db ./.tmp/swarmd/hello-heartbeat/swarmd-server.db
```
For the full runnable walkthrough, start with `examples/agents/hello-heartbeat` or browse `examples/README.md` for more example roots.
## Deployment
swarmd is a simple Go binary, so you can deploy it however you want. The easiest place to start is usually a decent-sized virtual machine with the binary, your agent YAML config root, and a persistent disk for the data directory.
The primary database is SQLite, so backing up is usually just a matter of copying that database file. In general, agent YAMLs should live in version control, while SQLite mostly tracks execution history and runtime state. A persistent disk is only required if you want to preserve an agent's filesystem contents or other sandbox state between runs.
See Adding Custom Tools for instructions on deploying tools that are specific to your environment.
## Examples
- `examples/agents/hello-heartbeat`: the smallest scheduled server example using the stock `server_log` tool
- `examples/agents/memory-filesystem`: a managed in-memory filesystem example using `runtime.filesystem.kind: memory`, with warm state preserved while the same worker stays alive
- `examples/agents/workspace-summary`: a filesystem-heavy example that mounts reusable context and writes a report into a demo workspace
- `examples/agents/github-repo-inspector`: a networked example that configures `network.reachable_hosts` and managed `http.headers`
- `examples/agents/github-monorepo-assistant`: a GitHub custom-tool example that combines repository, review, and CI reads for one shared repo
- `examples/embedding`: small Go programs that use `pkg/agent` directly without running the full server
## Agent YAML
The root README keeps the short version. The full reference lives in `docs/agent-yaml-guide.md`.
Filesystem-managed agent specs live under:
```
server-config/
  namespaces/
    <namespace>/
      agents/
        <agent>.yaml
```
A minimal agent spec looks like this:
```yaml
version: 1
model:
  name: gpt-5
prompt: |
  List the files in the current workspace and summarize what you find.
root_path: .
```
A slightly fuller spec can allow-list a custom tool, open outbound access to a specific host, and inject a server-owned HTTP credential from an environment variable:
```yaml
version: 1
model:
  name: gpt-5
prompt: |
  Inspect the repository and query the internal status API.
root_path: .
tools:
  - github_read_repo
network:
  reachable_hosts:
    - glob: api.internal.example.com
http:
  headers:
    - name: Authorization
      env_var: INTERNAL_STATUS_API_TOKEN
      domains:
        - glob: api.internal.example.com
```
In this example, `github_read_repo` is a custom tool explicitly allow-listed under `tools:`. `network.reachable_hosts` allows shell and global network tools to reach `api.internal.example.com`, and `http.headers[].env_var` injects a server-owned credential for that host without storing the secret in the prompt or workspace. Built-in tools should not be listed under `tools:`. See Custom Tool Catalog for the stock custom tools and `docs/agent-yaml-guide.md` for the full YAML reference.
The full guide covers:
- config root layout and path rules
- memory guidance, including the default `.memory/ROOT.md` workflow
- sandbox filesystem and mounts
- network policy and managed HTTP headers
- built-in vs custom structured tools
- runtime tuning, schedules, validation rules, and environment variables
## Custom Tool Catalog
The stock tool surface has two parts: built-in tools that every Agent gets automatically, and additional custom tools that Agents only receive when allow-listed under `tools:`.
All Agents always get these built-in tools, and they should not be listed under `tools:`:
- `apply_patch`: apply a structured patch to local files
- `describe_image`: describe an image through the active provider's native vision API using a sandbox file path, inline base64, or a public image URL
- `grep_files`: search local files with a regular expression and return matching paths
- `http_request`: make direct HTTP requests for API-style interactions
- `list_dir`: list entries in a local directory with bounded output
- `read_file`: read a local file with numbered, bounded output
- `read_web_page`: fetch a web page and convert it to markdown or text
- `run_shell`: run one sandboxed shell command when no structured tool fits
- `web_search`: search the public web through the runtime-owned search backend