swarmd

swarmd sits somewhere in the awkward intersection between "OpenClaw for Enterprise" and "Kubernetes for Agents".

WARNING: swarmd is alpha software. It has not yet been extensively tested at scale or hardened for production environments.

Overview

swarmd is a multi-tenant runtime for running background Agents in a safe and secure manner. Agents are defined in YAML and run as goroutines in a multi-tenant server with a virtual shell and custom tools. swarmd is not a generic sandbox for running existing agent harnesses. It is a stand-alone agent harness designed from the ground up with sandboxing in mind. For example, the following spec defines a scheduled heartbeat agent:

version: 1
agent_id: hello-heartbeat
name: Hello Heartbeat
model:
  name: gpt-5
prompt: |
  Use `server_log` to write exactly one info log entry, then finish with a short confirmation.
tools:
  - server_log
schedules:
  - id: every-minute
    cron: "* * * * *"
    timezone: UTC

pkg/agent can also be used directly from Go applications to embed sandboxed agents inside your own application without the full swarmd server. See examples/embedding for small end-to-end embedding examples.

swarmd does not rely on any operating system sandboxing primitives. It will run anywhere you can run a Go application, and it works exactly the same in all environments.

swarmd Agents have zero direct access to the host operating system: filesystem operations go through a filesystem interface that limits access to a specific subdirectory (or fake in-memory filesystem), and network access goes through a network interface with a custom dialer plus a managed HTTP layer for host-owned header injection.

All Agents have access to the same built-in tools:

  • apply_patch: apply a structured patch to local files
  • describe_image: describe an image through the active provider's native vision API using a sandbox file path, inline base64, or a public image URL
  • grep_files: search local files with a regular expression and return matching paths
  • http_request: make direct HTTP requests for API-style interactions
  • list_dir: list entries in a local directory with bounded output
  • read_file: read a local file with numbered, bounded output
  • read_web_page: fetch a web page and convert it to markdown or text
  • run_shell: run one sandboxed shell command when no structured tool fits
  • web_search: search the public web through the runtime-owned search backend

Additional custom tools can also be compiled into the server, but Agents only receive them on an allow-list basis: the tool must be present in the server binary and explicitly listed under the Agent's tools: field. See Custom Tool Catalog for the currently available custom tools, or Adding Custom Tools to write your own.

Agent activity and state are tracked in a local SQLite database that can be inspected with a local TUI.


The runtime also includes a persistent memory system inside the sandboxed filesystem. Agents can keep durable notes under .memory/, use .memory/ROOT.md as a small index, and load deeper topic files only when they are relevant to the current task.
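For example, a memory tree might look like this (the topic files are illustrative; only `.memory/ROOT.md` is named by the runtime's convention):

```
.memory/
  ROOT.md              # small index: what notes exist and when to load them
  deploy-runbook.md    # loaded only when a task involves deployments
  api-quirks.md        # loaded only when a task touches the internal API
```

Keeping `ROOT.md` short lets an Agent decide which deeper topic files are worth reading on a given run instead of loading everything into context.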

See Agent YAML for the short version and docs/agent-yaml-guide.md for the full reference.

Quick Start

swarmd expects a config root with a nested directory layout of YAML agent specs under namespaces/<namespace>/agents/*.yaml. There are two easy ways to get started: install the swarmd binary and scaffold that directory structure locally by running swarmd init, or clone this repository and run one of the bundled examples.

The stock swarmd binary supports both OpenAI and Anthropic worker drivers. The bundled example configs still default to OpenAI, so the commands below use OPENAI_API_KEY. Anthropic-backed configs should set model.provider: anthropic and provide ANTHROPIC_API_KEY.
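Based on the note above, an Anthropic-backed variant of the heartbeat spec might look like this; the model name is illustrative, and `ANTHROPIC_API_KEY` must be set in the server's environment:

```yaml
version: 1
agent_id: hello-heartbeat
name: Hello Heartbeat
model:
  provider: anthropic
  name: claude-sonnet-4-5   # illustrative; substitute any Anthropic model name
prompt: |
  Use `server_log` to write exactly one info log entry, then finish with a short confirmation.
tools:
  - server_log
```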

Install The Binary

If you want a local config root to start from, install swarmd directly and let swarmd init create the default directory structure plus a sample heartbeat agent:

go install github.com/richardartoul/swarmd/pkg/server/cmd/swarmd@latest
export OPENAI_API_KEY=your-openai-api-key
swarmd init
swarmd config validate
swarmd server

That bootstraps ./server-config/namespaces/default/agents/server-log-heartbeat.yaml and stores server state under ./data/. In another terminal, open the TUI against that local SQLite database:

swarmd tui

Clone The Repository And Run An Example

If you prefer to start from a checked-in example, clone the repository and point swarmd at one of the example config roots:

git clone https://github.com/richardartoul/swarmd.git
cd swarmd
export OPENAI_API_KEY=your-openai-api-key
go run ./pkg/server/cmd/swarmd server \
  --config-root ./examples/agents/hello-heartbeat/server-config \
  --data-dir ./.tmp/swarmd/hello-heartbeat

Open the TUI against the example database:

go run ./pkg/server/cmd/swarmd tui \
  --db ./.tmp/swarmd/hello-heartbeat/swarmd-server.db

For the full runnable walkthrough, start with examples/agents/hello-heartbeat or browse examples/README.md for more example roots.

Deployment

swarmd is a simple Go binary, so you can deploy it however you want. The easiest place to start is usually a decent-sized virtual machine with the binary, your agent YAML config root, and a persistent disk for the data directory.

The primary database is SQLite, so backups are usually just backups of that database file. In general, agent YAMLs should live in version control, while SQLite is mostly tracking execution history and runtime state. A persistent disk is only required if you want to preserve an agent's filesystem contents or other sandbox state between runs.
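For instance, a consistent point-in-time copy can be taken while the server is running by using SQLite's online backup command. The database path below matches the hello-heartbeat example's data directory; substitute your own `--data-dir`:

```shell
# Copy the live server database safely with SQLite's online .backup
# command (a plain `cp` of a database mid-write can produce a torn copy).
sqlite3 ./.tmp/swarmd/hello-heartbeat/swarmd-server.db \
  ".backup './swarmd-server-backup.db'"
```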

See Adding Custom Tools for instructions on deploying tools that are specific to your environment.

Examples

See examples/agents/ for runnable example config roots, starting with examples/agents/hello-heartbeat, and examples/README.md for the full list.

Agent YAML

The root README keeps the short version. The full reference lives in docs/agent-yaml-guide.md.

Filesystem-managed agent specs live under:

server-config/
  namespaces/
    <namespace>/
      agents/
        <agent>.yaml

A minimal agent spec looks like this:

version: 1
model:
  name: gpt-5
prompt: |
  List the files in the current workspace and summarize what you find.
root_path: .

A slightly fuller spec can allow-list a custom tool, open outbound access to a specific host, and inject a server-owned HTTP credential from an environment variable:

version: 1
model:
  name: gpt-5
prompt: |
  Inspect the repository and query the internal status API.
root_path: .
tools:
  - github_read_repo
network:
  reachable_hosts:
    - glob: api.internal.example.com
http:
  headers:
    - name: Authorization
      env_var: INTERNAL_STATUS_API_TOKEN
      domains:
        - glob: api.internal.example.com

In this example, github_read_repo is a custom tool explicitly allow-listed under tools:. network.reachable_hosts allows shell and global network tools to reach api.internal.example.com, and http.headers[].env_var injects a server-owned credential for that host without storing the secret in the prompt or workspace. Built-in tools should not be listed under tools:. See Custom Tool Catalog for the stock custom tools and docs/agent-yaml-guide.md for the full YAML reference.

The full guide covers:

  • config root layout and path rules
  • memory guidance, including the default .memory/ROOT.md workflow
  • sandbox filesystem and mounts
  • network policy and managed HTTP headers
  • built-in vs custom structured tools
  • runtime tuning, schedules, validation rules, and environment variables

Custom Tool Catalog

The stock tool surface has two parts: built-in tools that every Agent gets automatically, and additional custom tools that Agents only receive when allow-listed under tools:.

All Agents always get these built-in tools, and they should not be listed under the tools: field:

  • apply_patch: apply a structured patch to local files
  • describe_image: describe an image through the active provider's native vision API using a sandbox file path, inline base64, or a public image URL
  • grep_files: search local files with a regular expression and return matching paths
  • http_request: make direct HTTP requests for API-style interactions
  • list_dir: list entries in a local directory with bounded output
  • read_file: read a local file with numbered, bounded output
  • read_web_page: fetch a web page and convert it to markdown or text
  • run_shell: run one sandboxed shell command when no structured tool fits
  • web_search: search the public web through the runtime-owned search backend

No findings