
# 🦀 OpenCrabs

The autonomous, self-improving AI agent. Single Rust binary. Every channel.

Autonomous, self-improving multi-channel AI agent built in Rust. Inspired by Open Claw.

```
    ___                    ___           _
   / _ \ _ __  ___ _ _    / __|_ _ __ _| |__  ___
  | (_) | '_ \/ -_) ' \  | (__| '_/ _` | '_ \(_-<
   \___/| .__/\___|_||_|  \___|_| \__,_|_.__//__/
        |_|

 🦀 The autonomous, self-improving AI agent. Single Rust binary. Every channel.
```

Author: Adolfo Usier

⭐ Star us on GitHub if you like what you see!


## Why OpenCrabs?

OpenCrabs runs as a single binary on your terminal — no server, no gateway, no infrastructure. It makes direct HTTPS calls to LLM providers from your machine. Nothing else leaves your computer.

### OpenCrabs vs Node.js Agent Frameworks

| | OpenCrabs (Rust) | Node.js Frameworks (e.g. Open Claw) |
|---|---|---|
| Binary size | 17–22 MB single binary, zero dependencies | 1 GB+ `node_modules` with hundreds of transitive packages |
| Runtime | None — runs natively | Requires Node.js runtime + `npm install` |
| Attack surface | Zero network listeners; outbound HTTPS only | Server infrastructure: open ports, auth layers, middleware |
| API key security | Keys on your machine only; `zeroize` clears them from RAM on drop, `[REDACTED]` in all debug output | Keys in env vars or config; GC doesn't guarantee memory clearing, heap dumps can leak secrets |
| Data residency | 100% local — SQLite DB, embeddings, brain files, all in `~/.opencrabs/` | Server-side storage, potential multi-tenant data, network transit |
| Supply chain | Single compiled binary; Rust's type system prevents buffer overflows, use-after-free, and data races at compile time | npm ecosystem: typosquatting, dependency confusion, prototype pollution |
| Memory safety | Compile-time guarantees — no GC, no null pointers, no data races | GC-managed; prototype pollution and type coercion bugs |
| Concurrency | tokio async + Rust ownership = zero data races guaranteed | Single-threaded event loop; worker threads share memory unsafely |
| Native TTS/STT | Built-in local speech-to-text (whisper.cpp) and text-to-speech — ~130 MB total stack, fully offline | No native voice; requires external APIs (Google, AWS, Azure) or heavy Python dependencies (PyTorch, ~5 GB+) |
| Telemetry | Zero — no analytics, no tracking, no remote logging | Server infra typically includes monitoring, logging pipelines, APM |
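The key-hygiene row above combines two ideas: wipe the secret's bytes from RAM on drop, and redact it from all debug output. Here is a dependency-free sketch of that pattern — illustrative only, not OpenCrabs's actual code, which uses the `zeroize` crate:

```rust
use std::fmt;

/// Holds an API key, wipes it from RAM on drop, and redacts it in debug output.
/// Illustrative sketch only — OpenCrabs itself relies on the `zeroize` crate.
struct ApiKey(String);

impl ApiKey {
    fn new(raw: String) -> Self {
        ApiKey(raw)
    }

    /// Borrow the secret only at the call site that actually needs it.
    fn expose(&self) -> &str {
        &self.0
    }
}

/// Manual Debug impl so `{:?}` logging can never leak the key.
impl fmt::Debug for ApiKey {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("ApiKey([REDACTED])")
    }
}

impl Drop for ApiKey {
    fn drop(&mut self) {
        // Overwrite the backing bytes before the allocation is freed, so the
        // plaintext does not linger on the heap. Safe here because zero bytes
        // are valid UTF-8.
        unsafe { self.0.as_bytes_mut().fill(0) };
    }
}

fn main() {
    let key = ApiKey::new("sk-example-not-a-real-key".to_string());
    println!("{:?}", key); // ApiKey([REDACTED])
    assert!(key.expose().starts_with("sk-"));
}
```

Note that a plain overwrite like this can be elided by the optimizer; a production implementation needs the volatile-write guarantees that the `zeroize` crate provides.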

### What stays local (never leaves your machine)

  • All chat sessions and messages (SQLite)
  • Tool executions (bash, file reads/writes, git)
  • Memory and embeddings (local vector search)
  • Voice transcription in local STT mode (whisper.cpp, on-device)
  • Brain files, config, API keys

### What goes out (only when you use it)

  • Your messages to the LLM provider API (Anthropic, OpenAI, GitHub Copilot, etc.)
  • Web search queries (optional tool)
  • GitHub API via gh CLI (optional tool)
  • Browser automation (optional, browser feature — auto-detects Chromium-based browsers via CDP, not Firefox)
  • Dynamic tool HTTP requests (only when you define HTTP tools in tools.toml)
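To make the last item concrete, a dynamic HTTP tool declaration in `tools.toml` might look roughly like this. The field names below are illustrative assumptions, not the verified schema — consult the repository's tool documentation for the real format:

```toml
# Hypothetical tools.toml entry — field names are assumptions, not the
# verified schema. Only requests for tools you define here ever go out.
[[tools]]
name = "weather"
description = "Fetch current weather for a city"
method = "GET"
url = "https://api.example.com/weather?q={city}"
```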


## 📸 Screenshots

https://github.com/user-attachments/assets/7f45c5f8-acdf-48d5-b6a4-0e4811a9ee23


## 🎯 Core Features

### AI & Providers

| Feature | Description |
|---------|-------------|
| Multi-Provider | Anthropic Claude, OpenAI, GitHub Copilot (uses your Copilot subscription), OpenRouter (400+ models), MiniMax, Google Gemini, z.ai GLM (General API + Coding API), Claude CLI, OpenCode CLI, and any OpenAI-compatible API (Ollama, LM Studio, LocalAI). Model lists are fetched live from provider APIs — new models are available instantly. Each session remembers its provider + model and restores it on switch |
| Fallback Providers | Configure a chain of fallback providers — if the primary fails, each fallback is tried in sequence automatically. Any configured provider can be a fallback. Config: `[providers.fallback] providers = ["openrouter", "anthropic"]` |
| Per-Provider Vision | Set `vision_model` per provider — the LLM calls `analyze_image` as a tool, which uses the vision model on the same provider API to describe images. The chat model stays the same and gains vision capability via tool call. Gemini vision takes priority when configured. Auto-configured for known providers (e.g. MiniMax) on first run |
| Real-time Streaming | Character-by-character response streaming with an animated spinner showing the model name and live text |
| Claude CLI | Use your Claude Code subscription directly via the `claude` CLI — no additional setup. Just install Claude Code, authenticate, and select Claude CLI as your provider |
| Local LLM Support | Run with LM Studio, Ollama, or any OpenAI-compatible endpoint — 100% private, zero-cost |
| Cost Tracking | Per-message token count and cost displayed in the header; `/usage` shows an all-time breakdown grouped by model with real costs plus estimates for historical sessions |
| Context Awareness | Live context usage indicator showing actual token counts (e.g. `ctx: 45K/200K (23%)`); auto-compaction at 70% with tool overhead budgeting; accurate tiktoken-based counting calibrated against API actuals |
| 3-Tier Memory | (1) Brain `MEMORY.md` — user-curated durable memory loaded every turn; (2) Daily Logs — auto-compaction summaries at `~/.opencrabs/memory/YYYY-MM-DD.md`; (3) Hybrid Memory Search — FTS5 keyword search + local vector embeddings (embeddinggemma-300M, 768-dim) combined via Reciprocal Rank Fusion. Runs entirely locally — no API key, no cost, works offline |
| Dynamic Brain System | System brain assembled from workspace MD files (SOUL, IDENTITY, USER, AGENTS, TOOLS, MEMORY) — all editable live between turns |
| Multi-Agent Orchestration | Spawn independent child agents for parallel task execution. Five tools: `spawn_agent`, `wait_agent`, `send_input`, `close_agent`, `resume_agent`. Children run in isolated sessions with auto-approve and essential tools — no recursive spawning |
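Reciprocal Rank Fusion, used by the Hybrid Memory Search to merge keyword and vector results, is a standard technique: each document scores the sum of `1 / (k + rank)` over every list it appears in, so items ranked well by both searches rise to the top. A minimal sketch — illustrative, not OpenCrabs's internals; `k = 60` is the conventional default:

```rust
use std::collections::HashMap;

/// Fuse ranked result lists (document ids, best first) with Reciprocal Rank
/// Fusion: score(d) = Σ 1 / (k + rank_i(d)). Higher scores rank first.
fn rrf(lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in lists {
        for (rank, doc) in list.iter().enumerate() {
            // ranks are 1-based in the RRF formula
            *scores.entry((*doc).to_string()).or_insert(0.0) +=
                1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<_> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let keyword = vec!["note-3", "note-1", "note-7"]; // e.g. FTS5 ranking
    let vector  = vec!["note-1", "note-9", "note-3"]; // e.g. embedding ranking
    let fused = rrf(&[keyword, vector], 60.0);
    // note-1 and note-3 appear in both lists, so they rise to the top
    println!("{:?}", fused);
}
```

The appeal of RRF is that it needs only ranks, never raw scores, so it can combine an FTS5 BM25 ranking with cosine-similarity results without any score normalization.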

### Multimodal Input

| Feature | Description |
|---------|-------------|
| Image Attachments | Paste image paths or URLs into the input — auto-detected and attached as vision content blocks for multimodal models |
| PDF Support | Attach PDF files by path — native Anthropic PDF support; for other providers, text is extracted locally via `pdf-extract` |
| Document Parsing | Built-in `parse_document` tool extracts text from PDF, DOCX, HTML, TXT, MD, JSON, XML |
| Voice (STT) | Voice notes transcribed via API (Groq Whisper `whisper-large-v3-turbo`) or Local (whisper.cpp via `whisper-rs`, runs on-device). Choose the mode in `/onboard:voice`. Local mode: select a model size (Tiny 75 MB / Base 142 MB / Small 466 MB / Medium 1.5 GB), download from HuggingFace, zero API cost. Included by default |
| Voice (TTS) | Agent replies to voice notes with audio via API (OpenAI TTS `gpt-4o-mini-tts`) or Local (Piper TTS, runs on-device via a Python venv). Choose the mode in `/onboard:voice`. Local mode: select a voice (Ryan / Amy / Lessac / Kristin / Joe / Cori), auto-downloads from HuggingFace, zero API cost. Falls back to text if disabled |
| Attachment Indicator | Attached images show as `[IMG1:filename.png]` in the input title bar |
| Image Generation | Agent generates images via Google Gemini (`gemini-3.1-flash-image-preview`, "Nano Banana") using the `generate_image` tool — enabled via `/onboard:image`. Returned as native images/attachments in all channels |

### Messaging Integrations

| Feature | Description |
|---------|-------------|
| Telegram Bot | Full-featured Telegram bot — owner DMs share the TUI session, groups get isolated per-group sessions (keyed by chat ID). Photo/voice support, allowed user IDs, allowed chat/group IDs, `respond_to` filter (all/dm_only/mention). Passive group message ca… |
