Zenii (zen-ee-eye)
<!-- <p align="center"> <img src="crates/zenii-desktop/icons/icon.png" alt="Zenii" width="180" /> </p> -->

<p align="center">
  <img src="assets/zenii-master.gif" alt="Zenii Demo" width="720" />
</p>

<h1 align="center">Zenii: 20 megabytes. AI everywhere.</h1>
<h3 align="center"><em>Every tool on your machine shares one AI brain.</em></h3>

<p align="center">If Zenii looks useful, <a href="https://github.com/sprklai/zenii">star the repo</a> to help others find it.</p>

<p align="center"><code>20 MB desktop · 114 API routes · 18 tools · 6+ model providers · 1,500+ tests · One shared brain</code></p>

<p align="center">
  Install <strong>one binary</strong>. Now your scripts have <strong>AI memory</strong>. Your cron jobs <strong>reason</strong>. Your Telegram bot <strong>thinks</strong>.<br>
  And they all share the same brain — same memory, same tools, one address.<br>
  A private AI backend for everything on your machine — native desktop app, plugins in <strong>any language</strong>, and an API your <code>curl</code> can call. Powered by Rust.<br>
  <a href="https://zenii.sprklai.com">https://zenii.sprklai.com</a><br>
  <a href="https://docs.zenii.sprklai.com">https://docs.zenii.sprklai.com</a>
</p>

<p align="center">
  <a href="https://github.com/sprklai/zenii/releases/latest"><img src="https://img.shields.io/github/v/release/sprklai/zenii?style=flat-square" alt="Latest Release" /></a>
  <a href="https://github.com/sprklai/zenii/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/sprklai/zenii/ci.yml?style=flat-square&label=CI" alt="CI" /></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green?style=flat-square" alt="MIT License" /></a>
  <a href="https://github.com/sprklai/zenii/stargazers"><img src="https://img.shields.io/github/stars/sprklai/zenii?style=flat-square" alt="GitHub Stars" /></a>
  <a href="https://github.com/sprklai/zenii/pulls"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen?style=flat-square" alt="PRs Welcome" /></a>
  <img src="https://img.shields.io/badge/i18n-8%20languages-2196F3?style=flat-square&logo=translate&logoColor=white" alt="i18n 8 languages" />
</p>

> "ChatGPT is a tab you open. Zenii is a capability your machine gains."
>
> "Every tool you use is smart in isolation. Zenii makes them smart together."
## Try It in 30 Seconds
```bash
curl -fsSL https://raw.githubusercontent.com/sprklai/zenii/main/install.sh | bash

zenii-daemon &

curl -s -X POST http://localhost:18981/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id":"hello","prompt":"What can you do?"}' | jq .response
```
That's it. One binary, one address, AI everywhere. Now read on for what you can build with it.
## Quick Start
Download the latest installer for your platform from GitHub Releases:
| Platform | Desktop App | CLI + Daemon + TUI |
|----------|-------------|--------------------|
| Linux | `.deb` `.rpm` `.AppImage` | `zenii-linux` `zenii-daemon-linux` `zenii-tui-linux` |
| macOS | `.dmg` | `zenii-macos-arm64` `zenii-daemon-macos-arm64` `zenii-tui-macos-arm64` |
| Windows | `.msi` `.exe` (NSIS) | `zenii.exe` `zenii-daemon.exe` `zenii-tui.exe` |
| ARM | -- | `zenii-arm64` `zenii-daemon-arm64` |
Or install via script (Linux/macOS):
```bash
# Download & install CLI + daemon + TUI
curl -fsSL https://raw.githubusercontent.com/sprklai/zenii/main/install.sh | bash

# Start the daemon
zenii-daemon &

# Your first AI request
curl -X POST http://localhost:18981/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id": "hello", "prompt": "What can you do?"}'
```
Or use the desktop app, CLI, or TUI — they all talk to the same backend.
## Why Zenii?
| Your pain | How Zenii fixes it |
|-----------|-------------------|
| AI tools are islands — ChatGPT, Telegram, scripts, cron all have separate memory and context | One shared brain: every interface, channel, and script shares the same memory, tools, and intelligence via localhost:18981 |
| Context resets every AI session | Semantic memory persists across sessions and survives restarts |
| AI can't do things, only talk | 18 built-in tools (15 base + 3 feature-gated): web search, file ops, content search, shell, memory, config, and more. Workflow pipelines and parallel delegation for complex tasks |
| Locked into one AI provider | 6 built-in providers, switch with one config change |
| AI tools are cloud-only | 100% local, zero telemetry, encrypted credential storage |
| "Works on my machine" for AI | Same binary on macOS, Linux, Windows — desktop, CLI, or daemon |
| Plugin systems require learning a framework | JSON-RPC over stdio — write plugins in Python, Go, JS, or anything |
| AI doesn't learn your patterns | Experimental self-evolving skills with human-in-the-loop approval |
| AI can't run tasks while you sleep | Built-in cron scheduler for autonomous recurring tasks |
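The plugin row above (JSON-RPC over stdio, any language) can be sketched in a few lines of Python. This is an illustration only: the method name `tool.invoke` and the result shape are assumptions, not Zenii's documented plugin schema — check the plugin docs for the real contract.

```python
#!/usr/bin/env python3
"""Hypothetical Zenii plugin over JSON-RPC/stdio.

Illustration only: the method name "tool.invoke" and the result shape
are assumptions, not Zenii's documented plugin schema.
"""
import json
import sys

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request and build the response envelope."""
    if request.get("method") == "tool.invoke":  # hypothetical method name
        text = request.get("params", {}).get("text", "")
        result = {"reversed": text[::-1]}       # the plugin's actual work
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "method not found"}}

def main() -> None:
    """Event loop: one JSON-RPC request per stdin line, one reply per stdout line."""
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)

if __name__ == "__main__":
    main()
```

The point of the stdio transport is that any language with a JSON library can implement this loop; there is no SDK or framework to learn.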
## One Address. Everything Connects.
Every AI tool you use today is an island. ChatGPT doesn't know what your Telegram bot discussed. Your Python scripts can't access the memory your CLI built. Your cron jobs reason in isolation.
Zenii changes that. One address — localhost:18981 — serves every interface, every channel,
every automation, every language. Desktop app, CLI, TUI, Telegram, Slack, Discord,
your Python scripts, your Go services, your shell one-liners — all sharing the same memory,
same tools, same AI providers, same learned behaviors.
Write a memory from Telegram. Recall it from Python. Schedule a task from the CLI. Get notified on Discord. Nothing is siloed. Everything converges.
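The same convergence works from code. Below is a minimal Python sketch against the two endpoints used throughout this README (`POST /memory` and `POST /chat`), assuming the daemon is running on the default port and replies with JSON as in the quick-start example:

```python
import json
import urllib.request

ZENII = "http://localhost:18981"  # the one shared address every tool calls

def build_request(path: str, body: dict) -> urllib.request.Request:
    """Build a JSON POST request for a Zenii endpoint."""
    return urllib.request.Request(
        ZENII + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

def post(path: str, body: dict) -> dict:
    """POST to the running daemon and decode its JSON reply."""
    with urllib.request.urlopen(build_request(path, body)) as resp:
        return json.load(resp)

# Store a fact once; any interface (CLI, Telegram, cron) can recall it later:
#   post("/memory", {"key": "deploy", "content": "Prod DB moved to port 5434"})
#   post("/chat", {"session_id": "deploy", "prompt": "What port is prod DB on?"})
```

No client library is required: it is plain JSON over HTTP, so the stdlib is enough.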
## What Zenii is NOT

- Not a chatbot wrapper — it's a full API backend with 114 routes
- Not Electron — native Tauri 2, under 20 MB
- Not a framework you learn — it's infrastructure you call via `curl`
- Not cloud-dependent — runs fully offline with Ollama
- Not opinionated about your stack — any language, any tool, JSON over HTTP
- Not a walled garden — built-in MCP server lets Claude Code, Cursor, and any MCP client use Zenii's tools natively. A2A Agent Card at `/.well-known/agent.json`. See AGENT.md for integration details.
## What Can I Automate?
```bash
# Schedule a daily morning briefing
curl -X POST http://localhost:18981/scheduler/jobs \
  -H "Content-Type: application/json" \
  -d '{"name":"briefing","schedule":{"Cron":{"expr":"0 9 * * *"}},"payload":{"AgentTurn":{"prompt":"Summarize system status and news"}}}'

# Store knowledge the AI should remember
curl -X POST http://localhost:18981/memory \
  -H "Content-Type: application/json" \
  -d '{"key":"deploy", "content":"Production DB is on port 5433, deploy via ssh prod"}'

# Ask a question that uses stored memory
curl -X POST http://localhost:18981/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id":"ops", "prompt":"How do I deploy to production?"}'

# List what tools the agent has
curl http://localhost:18981/tools | jq '.[].name'

# Send a message via Telegram
curl -X POST http://localhost:18981/channels/telegram/send \
  -H "Content-Type: application/json" \
  -d '{"content":"Deploy complete", "recipient":"123456"}'
```
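The scheduler body is the least obvious of these: it nests two tagged variants, `Cron` under `schedule` and `AgentTurn` under `payload`. A small helper makes the shape explicit. The field names below mirror the curl example exactly; anything beyond that example is an assumption, so check the API docs before relying on it:

```python
import json

def briefing_job(name: str, cron_expr: str, prompt: str) -> str:
    """Build the JSON body for POST /scheduler/jobs, matching the
    shape shown in the curl example above."""
    job = {
        "name": name,
        # "Cron" is the schedule variant; "expr" is a standard 5-field cron string
        "schedule": {"Cron": {"expr": cron_expr}},
        # "AgentTurn" asks the agent to run a full reasoning turn on a schedule
        "payload": {"AgentTurn": {"prompt": prompt}},
    }
    return json.dumps(job)

body = briefing_job("briefing", "0 9 * * *", "Summarize system status and news")
# then: curl -X POST http://localhost:18981/scheduler/jobs -d "$body" ...
```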
## Follow the Memory
```bash
# 9 AM — Store from desktop/CLI
curl -X POST localhost:18981/memory \
  -H "Content-Type: application/json" \
  -d '{"key":"deploy","content":"Prod DB moved to port 5434"}'

# 10 AM — Your Python deploy script asks
curl -X POST localhost:18981/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id":"deploy","prompt":"What port is prod DB on?"}'
# → "5434"

# 2 PM — Teammate asks via Telegram → same answer
# 3 PM — Cron job generates status report → includes the update

# One memory. Four interfaces. Zero configuration.
```
## MCP: Use Zenii from Claude Code, Cursor, or Any AI Agent
Zenii ships a standalone MCP server binary. Any MCP-compatible client — Claude Code, Cursor, VS Code, Windsurf — can use Zenii's 18 tools natively.
**Setup (Claude Code)** — add to your project `.mcp.json`:

```json
{
  "mcpServers": {
    "zenii": {
      "command": "zenii-mcp-server",
      "args": ["--transport", "stdio"]
    }
  }
}
```
Now Claude Code can search the web, read/write files, execute shell commands, store/recall persistent memory, and more — all through Zenii's tool registry with security policy enforcement.
What MCP clients see:
```text
# Tools exposed (with zenii_ prefix):
zenii_system_info, zenii_web_search, zenii_file_read, zenii_file_write,
zenii_file_list, zenii_file_search, zenii_content_search, zenii_shell,
zenii_process, zenii_patch, zenii_memory, zenii_learn, zenii_config, ...
```
**A2A Agent Card** — other agents can discover Zenii at `GET /.well-known/agent.json`.
See AGENT.md for full integration docs, tool schemas, and multi-agent examples.
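Discovery can be scripted too. The sketch below assumes only what is stated above: the card is JSON served at the well-known path on a running daemon. Its exact fields depend on the A2A spec version and are documented in AGENT.md, so nothing about the card's contents is assumed here:

```python
import json
import urllib.request

def agent_card_url(base: str) -> str:
    """Build the well-known Agent Card URL for a Zenii daemon."""
    return base.rstrip("/") + "/.well-known/agent.json"

def fetch_agent_card(base: str = "http://localhost:18981") -> dict:
    """Fetch and decode the Agent Card from a running daemon."""
    with urllib.request.urlopen(agent_card_url(base)) as resp:
        return json.load(resp)

# card = fetch_agent_card()  # requires the daemon to be running locally
```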
## How It Fits
Zenii is a local AI infrastructure layer — not a chatbot, not a framework, not an API wrapper. It gives your machine a shared AI backend that any tool can call via HTTP.
## The Self-Evolution Story
Most AI tools are static — they do exactly what they did on day one. OpenClaw self-modifies without asking. Zenii takes a third path:
- Zenii observes your patterns and preferences over time
- Zenii proposes skill modifications ("I notice you always want code reviews on Fridays. Want me to schedule that?")
- You approve or reject — like a PR from your AI
- Zenii learns — approved changes become permanent skills
Your AI gets smarter. You stay in control. No surprises.
Note: Self-evolution is experimental. Pattern detection quality depends on your LLM provider and how you use Zenii. It works best with capable models (GPT-4, Claude) and consistent usage patterns.
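The propose/approve/learn loop above can be modeled in a few lines. This is a toy illustration of the design (proposals never take effect without explicit approval); Zenii's internal skill store and proposal format are not public API, so every name here is invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class SkillStore:
    """Toy model of human-in-the-loop skill evolution: the AI may only
    propose; a human decision is required before anything lands."""
    skills: list = field(default_factory=list)
    proposals: list = field(default_factory=list)

    def propose(self, skill: str) -> int:
        """The AI suggests a change; nothing takes effect yet."""
        self.proposals.append(skill)
        return len(self.proposals) - 1

    def review(self, idx: int, approve: bool) -> None:
        """The human decides, like merging or closing a PR."""
        if approve:
            self.skills.append(self.proposals[idx])  # approved => permanent

store = SkillStore()
i = store.propose("Schedule code reviews every Friday")
store.review(i, approve=True)   # approved changes become permanent skills
```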