English | 中文 | Français | 한국어 | 日本語 | Deutsch | Português
<br> <div align="center"> <a href="https://github.com/SafeRL-Lab/clawspring"> <img src="docs/logo-5.png" alt="Logo" width="280"> </a> <h2 align="center" style="font-size: 30px;"><strong><em>CheetahClaws (Nano Claude Code)</em></strong>: A Fast, Easy-to-Use, Python-Native Personal AI Assistant for Any Model, Inspired by OpenClaw and Claude Code, Built to Work for You Autonomously 24/7</h2> <p align="center"> <a href="https://github.com/chauncygu/collection-claude-code-source-code">The newest source of Claude Code</a> · <a href="https://github.com/SafeRL-Lab/clawspring/issues">Issue</a> · <a href="https://deepwiki.com/SafeRL-Lab/clawspring">Brief Intro</a> </p> </div>
<div align=center> <img src="https://github.com/SafeRL-Lab/clawspring/blob/main/docs/demo.gif" width="850"/> </div> <div align=center> <center style="color:#000000;text-decoration:underline">Task Execution</center> </div>
<div align=center> <img src="https://github.com/SafeRL-Lab/clawspring/blob/main/docs/brainstorm_demo.gif" width="850"/> </div> <div align=center> <center style="color:#000000;text-decoration:underline">Brainstorm Mode: Multi-Agent Brainstorm</center> </div>
<div align=center> <img src="https://github.com/SafeRL-Lab/clawspring/blob/main/docs/proactive_demo.gif" width="850"/> </div> <div align=center> <center style="color:#000000;text-decoration:underline">Proactive Mode: Autonomous Agent</center> </div>
<div align=center> <img src="https://github.com/SafeRL-Lab/clawspring/blob/main/docs/ssj_demo.gif" width="850"/> </div> <div align=center> <center style="color:#000000;text-decoration:underline">SSJ Developer Mode: Power Menu Workflow</center> </div>
<div align=center> <img src="https://github.com/SafeRL-Lab/clawspring/blob/main/docs/telegram_demo.gif" width="850"/> </div> <div align=center> <center style="color:#000000;text-decoration:underline">Telegram Bridge: Control CheetahClaws from Your Phone</center> </div>
## 🔥🔥🔥 News (Pacific Time)
- Apr 06, 2026 (v3.05.53): Telegram interactive menus, `/img` alias, `/voice device`, OpenAI/Gemini vision support
  - Telegram interactive menus fixed — slash commands with interactive input (e.g. `/ollama`, `/permission`, `/checkpoint`) were blocking the Telegram poll loop, making it impossible to respond to the menu prompts. Slash commands now run in a daemon thread (like regular queries), keeping the poll loop free. All interactive menus (`ask_input_interactive`) work correctly over Telegram.
  - `/img` alias — `/img` is now an alias for `/image`, for faster clipboard-image workflows.
  - `/voice device` — new subcommand to list all available input microphones and select one interactively. The selected device index is persisted in the session config and shown in `/voice status`. Useful on systems with multiple audio interfaces (e.g. USB headset + built-in mic).
  - Vision support for OpenAI / Gemini models — `/img` (and `/image`) now sends images in the OpenAI multipart `image_url` format to cloud vision models (GPT-4o, Gemini 2.0 Flash, etc.), in addition to the existing Ollama native format. No configuration change needed — the correct format is selected automatically based on the active provider.
  - Bug fix: threading race condition — `_in_telegram_turn` is now tracked via `threading.local()` per slash-runner thread instead of a shared config key, eliminating a race condition that could corrupt the flag when a regular message arrived while an interactive slash command was waiting for input.
- Apr 06, 2026 (v3.05.52): Checkpoint system, plan mode, compact, and utility commands; MiniMax model support; Telegram bug fixes
  - Checkpoint system (`checkpoint/` package): auto-snapshots conversation state and file changes after every turn. `/checkpoint` lists all snapshots; `/checkpoint <id>` rewinds both files and conversation history to any previous state; `/checkpoint clear` removes all snapshots for the session. `/rewind` is an alias. 100-snapshot sliding window; initial snapshot captured at session start. Throttling: skips when nothing changed. File backups use copy-on-write; snapshots capture post-edit state.
  - Plan mode: `/plan <desc>` enters a read-only analysis mode — Claude may only read the codebase and write to a dedicated plan file (`.nano_claude/plans/<session_id>.md`). All other writes are silently blocked with a helpful message. `/plan` shows the current plan; `/plan done` exits plan mode and restores original permissions; `/plan status` reports whether plan mode is active. Two new agent tools — `EnterPlanMode` and `ExitPlanMode` — let Claude autonomously enter and exit plan mode for complex multi-file tasks; both are auto-approved in all permission modes.
  - `/compact [focus]`: manually trigger conversation compaction at any time. An optional focus string guides the LLM summarizer on what context to preserve. Auto-compact and manual compact both restore plan file context after compaction.
  - Utility commands: `/init` creates a `CLAUDE.md` template in the current directory; `/export [filename]` exports the conversation as Markdown (default) or JSON; `/copy` copies the last assistant response to the clipboard (Windows/macOS/Linux); `/status` shows version, model, provider, permissions, session ID, token usage, and context %; `/doctor` diagnoses installation health (Python version, git, API key + live connectivity test, optional deps, CLAUDE.md presence, checkpoint disk usage, permission mode).
- Apr 06, 2026 (v3.05.51): Project renamed from Nano Claude Code to CheetahClaws
  - The project has been rebranded from Nano Claude Code to CheetahClaws — a more distinctive name that captures the spirit of the tool: a sharp, agile coding assistant. The `C` in CheetahClaws is a subtle nod to Claude.
  - CLI command: `nano_claude` → `cheetahclaws`
  - PyPI package: `nano-claude-code` → `cheetahclaws`
  - Config directory: `~/.nano_claude/` → `~/.clawnest/` → `~/.cheetahclaws/`
  - Main entry point: `nano_claude.py` → `cheetahclaws.py`
  - All documentation, GitHub URLs, and internal references updated accordingly.
  - Added CheetahClaws vs OpenClaw comparison section to README.
For more news, see here
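The threading fix in v3.05.53 relies on thread-local storage: each slash-runner thread gets its own copy of the in-turn flag, so the poll loop and a waiting slash command can no longer clobber each other's state. A minimal sketch of the pattern (the helper names are hypothetical, not CheetahClaws' actual internals):

```python
import threading

# Each thread that touches this object sees its own attribute set, so a flag
# set by a slash-runner thread is invisible to the poll loop's thread.
_tls = threading.local()

def set_in_telegram_turn(value: bool) -> None:
    _tls.in_telegram_turn = value

def in_telegram_turn() -> bool:
    # Threads that never set the flag fall back to False.
    return getattr(_tls, "in_telegram_turn", False)

# Demonstrate isolation: a worker sets the flag; the main thread never sees it.
results = {}

def worker():
    set_in_telegram_turn(True)
    results["worker"] = in_telegram_turn()

t = threading.Thread(target=worker, daemon=True)
t.start()
t.join()
results["main"] = in_telegram_turn()
print(results)  # the worker reads True, the main thread still reads False
```

Compared with a shared config key, no locking is needed here: isolation comes from the storage itself rather than from synchronizing access to one flag.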
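The checkpoint policy described in the v3.05.52 entry (100-snapshot sliding window, skip-when-unchanged throttling, initial snapshot at session start) can be sketched with a bounded deque. This is an illustration of the stated policy, not the actual `checkpoint/` package:

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

MAX_SNAPSHOTS = 100  # window size stated in the changelog

@dataclass
class Snapshot:
    snapshot_id: int
    state: str  # stand-in for conversation history + file contents

class CheckpointStore:
    def __init__(self) -> None:
        # maxlen makes the deque drop the oldest snapshot once the window is full
        self.snapshots = deque(maxlen=MAX_SNAPSHOTS)
        self._last_state: Optional[str] = None
        self._next_id = 0

    def snapshot(self, state: str) -> bool:
        """Record a post-turn snapshot; return False when throttled (no changes)."""
        if state == self._last_state:
            return False
        self.snapshots.append(Snapshot(self._next_id, state))
        self._next_id += 1
        self._last_state = state
        return True

store = CheckpointStore()
first = store.snapshot("initial")   # captured at session start
second = store.snapshot("initial")  # skipped: nothing changed since last snapshot
for i in range(150):
    store.snapshot(f"turn {i}")
print(len(store.snapshots))  # capped at 100 by the sliding window
```

Rewinding to `/checkpoint <id>` would then be a lookup in `snapshots` followed by restoring both the conversation state and the backed-up files.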
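The OpenAI multipart `image_url` format mentioned in the v3.05.53 entry carries the image as one part of a content list alongside the text prompt, typically as a base64 data URL. A minimal payload builder (a generic sketch of that message shape, not CheetahClaws' actual `/img` code):

```python
import base64

def build_vision_message(prompt: str, image_bytes: bytes,
                         mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat message with a text part and an image part."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    data_url = f"data:{mime};base64,{encoded}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }

# Placeholder bytes stand in for real clipboard image data.
msg = build_vision_message("What is in this image?", b"\x89PNG...")
print(msg["content"][1]["type"])  # image_url
```

A provider check would pick between this shape and the Ollama native format at send time, which is presumably how the automatic selection described above works.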
## CheetahClaws
CheetahClaws: A Lightweight and Easy-to-Use Python Reimplementation of Claude Code Supporting Any Model, such as Claude, GPT, Gemini, Kimi, Qwen, Zhipu, DeepSeek, MiniMax, and local open-source models via Ollama or any OpenAI-compatible endpoint.
## Contents
- Why CheetahClaws
- CheetahClaws vs OpenClaw
- Features
- Supported Models
- Installation
- Usage: Closed-Source API Models
- Usage: Open-Source Models (Local)
- Model Name Format
- CLI Reference
- Slash Commands (REPL)
- Configuring API Keys
- Permission System
- Built-in Tools
- Memory
- Skills
- Sub-Agents
- MCP (Model Context Protocol)
- Plugin System
- AskUserQuestion Tool
- Task Management
- Voice Input
- Brainstorm
- SSJ Developer Mode
- Telegram Bridge
- Proactive Background Monitoring
- Checkpoint System
- Plan Mode
- Context Compression
- Diff View
- CLAUDE.md Support
- Session Management
- Cloud Sync (GitHub Gist)
- Project Structure
- FAQ
## Why CheetahClaws
Claude Code is a powerful, production-grade AI coding assistant — but its source code is a compiled, 12 MB TypeScript/Node.js bundle (~1,300 files, ~283K lines). It is tightly coupled to the Anthropic API, hard to modify, and impossible to run against a local or alternative model.
CheetahClaws reimplements the same core loop in ~10K lines of readable Python, keeping everything you need and dropping what you don't. For a more detailed analysis (as of CheetahClaws v3.03), see the English version and the Chinese version.
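The "core loop" referred to above is the standard agentic tool-use loop: send the conversation to the model, execute any tool call it requests, append the result, and repeat until the model replies with plain text. A minimal sketch with a stubbed model (all names here are hypothetical, not CheetahClaws' actual API):

```python
from typing import Callable

# Tool registry: name -> callable. A real agent registers Read/Write/Bash etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "echo": lambda arg: f"echoed: {arg}",
}

def agent_loop(llm: Callable[[list], dict], user_input: str,
               max_turns: int = 10) -> str:
    """Run the tool-use loop until the model returns a final text answer."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = llm(messages)
        if reply.get("tool") is None:
            return reply["content"]  # plain text answer: we are done
        # Execute the requested tool and feed the result back for the next turn.
        result = TOOLS[reply["tool"]](reply["arg"])
        messages.append({"role": "tool", "content": result})
    return "max turns reached"

# Stub model: requests one tool call, then answers with the tool's output.
def stub_llm(messages: list) -> dict:
    if messages[-1]["role"] == "user":
        return {"tool": "echo", "arg": "hello", "content": None}
    return {"tool": None, "content": messages[-1]["content"]}

print(agent_loop(stub_llm, "say hello"))  # echoed: hello
```

Everything else (permissions, checkpoints, slash commands) layers on top of this loop, which is why a small reimplementation can stay compatible with many providers: only `llm` changes per backend.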
### At a glance
| Dimension | Claude Code (TypeScript) | CheetahClaws (Python) |
|-----------|--------------------------|-----------------------|
| Language | TypeScript + React/Ink | Python 3.8+ |
| Source files | ~1,332 TS/TSX files | 51 Python files |
| Lines of code | ~283K | ~12K |
| Built-in tools | 44+ | 27 |
| Slash commands | 88 | 36 |
| Voice input | Proprietary Anthropic WebSocket (OAuth required) | Local Whisper / OpenAI API — works offline, no subscription |
| Model providers | Anthropic only | 8+ (Anthropic · OpenAI · Gemini · Kimi · Qwen · DeepSeek · Zhipu · MiniMax · Ollama / any OpenAI-compatible endpoint) |
