MicroClaw
<img src="icon.png" alt="MicroClaw logo" width="56" align="right" />

<p align="center">
  <img src="screenshots/headline.png" alt="MicroClaw headline logo" width="92%" />
</p>

<p align="center">
  <strong>One agent runtime for Telegram, Discord, Slack, Feishu, IRC, Web, and more.</strong><br />
  Multi-step tool use, persistent memory, scheduled tasks, skills, MCP, and a local web control plane.
</p>

<p align="center">
  <a href="#quick-start">Quick Start</a> | <a href="#install">Install</a> | <a href="#why-microclaw">Why MicroClaw</a> | <a href="#how-it-works">Architecture</a> | <a href="#documentation">Docs</a>
</p>

<p align="center">
  <strong>Quick Routes:</strong> <a href="docs/generated/tools.md">Tools</a> · <a href="docs/generated/config-defaults.md">Config Defaults</a> · <a href="docs/generated/provider-matrix.md">Provider Matrix</a> · <a href="docs/operations/runbook.md">Runbook</a> · <a href="docs/operations/http-hook-trigger.md">Web Hooks</a> · <a href="docs/clawhub/overview.md">ClawHub</a>
</p>

MicroClaw is an agent runtime for chat surfaces. It gives you one channel-agnostic agent loop, one provider-agnostic LLM layer, and one persistent runtime that can move across Telegram, Discord, Slack, Feishu/Lark, IRC, Web, and additional adapters over time.
It works with Anthropic and OpenAI-compatible providers, supports multi-step tool execution, keeps session state across turns, stores durable memory, runs scheduled tasks, and can expose the same runtime through both chat channels and a local web UI.
<p align="center">
  <img src="screenshots/screenshot1.png" width="45%" />
  <img src="screenshots/screenshot2.png" width="45%" />
</p>

Why MicroClaw
- One runtime, many channels: keep the same agent loop, tools, memory, and policies across chat platforms.
- Built for agentic execution: tool calls, tool-result reflection, sub-agents, planning, and mid-run updates are first-class.
- Persistent by default: sessions resume, memory survives restarts, and scheduled tasks keep running in the background.
- Provider-agnostic: use Anthropic or OpenAI-compatible APIs without rewriting the runtime.
- Extensible where it matters: add skills, MCP servers, plugins, hooks, and new channel adapters without replacing the core.
Quick Start
Install:
curl -fsSL https://microclaw.ai/install.sh | bash
Run diagnostics:
microclaw doctor
Create config with the interactive wizard:
microclaw setup
Start the runtime:
microclaw start
Default local web UI:
http://127.0.0.1:10961
If you want a source build instead, jump to Install. If you want operational details, start with Setup and Documentation.
Install
One-line installer (recommended)
curl -fsSL https://microclaw.ai/install.sh | bash
For the full variant (includes Matrix channel support):
curl -fsSL https://microclaw.ai/install.sh | bash -s -- --full
Windows PowerShell installer
iwr https://microclaw.ai/install.ps1 -UseBasicParsing | iex
For the full variant (adds Matrix channel) on Windows:
& ([scriptblock]::Create((iwr https://microclaw.ai/install.ps1 -UseBasicParsing).Content)) -Full
The installer does exactly one thing: it downloads and installs the matching prebuilt binary from the latest GitHub release. It does not fall back to Homebrew or Cargo internally; use the separate methods below for those.
Upgrade in place later:
microclaw upgrade
Preflight diagnostics
Run cross-platform diagnostics before first start (or when troubleshooting):
microclaw doctor
Machine-readable output for support tickets:
microclaw doctor --json
Checks include PATH, shell runtime, agent-browser, PowerShell policy (Windows), and MCP command dependencies from <data_dir>/mcp.json plus <data_dir>/mcp.d/*.json.
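For reference, here is a hypothetical `<data_dir>/mcp.json` entry of the kind `doctor` would check command dependencies for. The layout is an assumption modeled on common MCP client configs, not MicroClaw's verified schema; consult the generated docs for the real format:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```

For an entry like this, `doctor` would verify that the `npx` command is available on PATH.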
Sandbox-only diagnostics:
microclaw doctor sandbox
Uninstall (script)
macOS/Linux:
curl -fsSL https://microclaw.ai/uninstall.sh | bash
Windows PowerShell:
iwr https://microclaw.ai/uninstall.ps1 -UseBasicParsing | iex
Homebrew (macOS)
brew tap microclaw/tap
brew install microclaw # default
brew install microclaw-full # full (adds Matrix channel)
Docker image
Release tags publish an official container image to:
- ghcr.io/microclaw/microclaw:latest
- ghcr.io/microclaw/microclaw:&lt;version&gt;
- docker.io/microclaw/microclaw:latest (when Docker Hub publishing credentials are configured for the repository)
For first-time pulls from GHCR, you may need:
docker login ghcr.io
Use your GitHub username and a Personal Access Token with read:packages.
Quickest way to try the image:
docker pull ghcr.io/microclaw/microclaw:latest
docker run --rm -it \
-p 127.0.0.1:10961:10961 \
ghcr.io/microclaw/microclaw:latest
Recommended for real use: keep config and runtime data on the host:
mkdir -p data tmp
chmod a+r microclaw.config.yaml
chmod -R a+rwX data tmp
docker run --rm -it \
-p 127.0.0.1:10961:10961 \
-v "$(pwd)/microclaw.config.yaml:/app/microclaw.config.yaml:ro" \
-v "$(pwd)/data:/home/microclaw/.microclaw" \
-v "$(pwd)/tmp:/app/tmp" \
ghcr.io/microclaw/microclaw:latest
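The same port mapping and mounts can also be expressed as a Compose service (a sketch; the service name and file layout are arbitrary):

```yaml
services:
  microclaw:
    image: ghcr.io/microclaw/microclaw:latest
    ports:
      - "127.0.0.1:10961:10961"
    volumes:
      - ./microclaw.config.yaml:/app/microclaw.config.yaml:ro
      - ./data:/home/microclaw/.microclaw
      - ./tmp:/app/tmp
    stdin_open: true
    tty: true
```

Start it with `docker compose up` from the directory containing the file.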
Why mount them:
- microclaw.config.yaml: keep configuration outside the container
- data/: persist sessions, memory, skills, database, and runtime state
- tmp/: provide a writable temp directory for container-side work
The image entrypoint is microclaw, so you can override the command directly:
docker run --rm ghcr.io/microclaw/microclaw:latest doctor
docker run --rm ghcr.io/microclaw/microclaw:latest version
If startup fails with Permission denied (os error 13), re-check the chmod commands above and verify the mounted paths exist.
From source
git clone https://github.com/microclaw/microclaw.git
cd microclaw
cargo build --release
cp target/release/microclaw /usr/local/bin/
Optional full build with heavier integrations enabled:
cargo build --release --features full
The full feature currently enables channel-matrix. The default build already includes MCP support and every channel except Matrix; the full build adds the Matrix SDK.
Optional semantic-memory build (sqlite-vec disabled by default):
cargo build --release --features sqlite-vec
First-time sqlite-vec quickstart (3 commands):
cargo run --features sqlite-vec -- setup
cargo run --features sqlite-vec -- start
sqlite3 <data_dir>/runtime/microclaw.db "SELECT id, chat_id, chat_channel, external_chat_id, category, embedding_model FROM memories ORDER BY id DESC LIMIT 20;"
In setup, set:
- embedding_provider=openai or embedding_provider=ollama
- provider credentials / base URL / model as needed
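As a sketch, the resulting config section might look like the following. Key names other than `embedding_provider` and `embedding_model` are assumptions; the setup wizard writes the authoritative keys:

```yaml
# Hypothetical excerpt of microclaw.config.yaml; verify key names
# against what `microclaw setup` actually writes.
embedding_provider: ollama          # or "openai"
embedding_model: nomic-embed-text   # model name is an example
embedding_base_url: http://127.0.0.1:11434
```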
How it works
Every message goes through a shared agent loop:
- Load file memory, structured memory, skills, and resumable session state
- Call the configured model with tool schemas and runtime context
- Execute tool calls, append results, and continue the loop until completion
- Persist the updated session, memory signals, and observability data
This keeps behavior consistent across channels and lets one runtime power interactive chat, scheduled work, web-triggered automation, and sub-agent execution.
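The four steps above can be sketched as a toy loop. This is illustrative only: the type and function names are assumptions for the sketch, not MicroClaw's actual API.

```rust
// Toy model of a channel-agnostic agent loop: call the model, execute
// tool calls, append results, repeat until a final answer is produced.

#[derive(Debug)]
enum Step {
    ToolCall(String), // model asked to run a tool
    Final(String),    // model produced the final answer
}

fn call_model(history: &[String]) -> Step {
    // Stand-in for the provider call: finish once a tool result is present.
    if history.iter().any(|m| m.starts_with("tool:")) {
        Step::Final("done".into())
    } else {
        Step::ToolCall("web_search".into())
    }
}

fn run_turn(mut history: Vec<String>) -> Vec<String> {
    loop {
        match call_model(&history) {
            Step::ToolCall(name) => {
                // Execute the tool and append its result, then loop again.
                history.push(format!("tool:{name}=ok"));
            }
            Step::Final(answer) => {
                history.push(format!("assistant:{answer}"));
                break;
            }
        }
    }
    history // this is what gets persisted as resumable session state
}

fn main() {
    let session = run_turn(vec!["user:hi".into()]);
    println!("{}", session.len()); // 3: user msg, tool result, final answer
}
```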
<p align="center">
  <img src="docs/assets/readme/microclaw-architecture.svg" alt="MicroClaw architecture overview" width="96%" />
</p>

Blog post
For a deeper dive into the architecture and design decisions, read: Building MicroClaw: An Agentic AI Assistant in Rust That Lives in Your Chats
Features
- Agentic tool use -- bash commands, file read/write/edit, glob search, regex grep, persistent memory
- Session resume -- full conversation state (including tool interactions) persisted between messages; the agent keeps tool-call state across invocations
- Context compaction -- when sessions grow too large, older messages are automatically summarized to stay within context limits
- Sub-agent -- delegate self-contained sub-tasks to a parallel agent with restricted tools
- Agent skills -- extensible skill system (Anthropic Skills compatible); skills are auto-discovered from <data_dir>/skills/ and activated on demand
- Plan & execute -- todo list tools for breaking down complex tasks, tracking progress step by step
- Platform-extensible architecture -- shared agent loop + tool system + storage, with platform adapters for channel-specific ingress/egress
- Web search -- search the web via DuckDuckGo and fetch/parse web pages
- Scheduled tasks -- cron-based recurring tasks and one-time scheduled tasks, managed through natural language
- Mid-conversation messaging -- the agent can send intermediate messages before its final response
- Mention catch-up (Telegram groups) -- when mentioned in a Telegram group, the bot reads all messages since its last reply (not just the last N)
- Continuous typing indicator -- typing indicator stays active for the full duration of processing
- Persistent memory -- AGENTS.md files at global, bot/account, and per-chat scopes, loaded into every request
- Message splitting -- long responses are automatically split at newline boundaries to fit channel limits (Telegram 4096 / Discord 2000)
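As an illustration of the message-splitting bullet, a minimal newline-boundary splitter might look like this. It is a sketch, not MicroClaw's implementation; note that a single line longer than the limit is left as one oversized chunk here.

```rust
// Split `text` into chunks of at most `limit` bytes, cutting only at
// newline boundaries (sketch of the "message splitting" behavior).
fn split_message(text: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for line in text.split_inclusive('\n') {
        // Flush the current chunk before it would exceed the limit.
        if current.len() + line.len() > limit && !current.is_empty() {
            chunks.push(std::mem::take(&mut current));
        }
        current.push_str(line);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    for chunk in split_message("first line\nsecond line\nthird\n", 24) {
        println!("{:?}", chunk);
    }
}
```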