RamiBot
RamiBot v3.8.0 is a local-first AI security operations platform that integrates multi-LLM support, a dynamic red/blue team skill pipeline, MCP tool orchestration, Docker terminal access, Tor proxy management, and an auto-integrated Kali-based tool server (rami-kali) for controlled, extensible offensive and defensive workflows.
<p align="center"> <b>Execute. Analyze. Harden.</b> </p>
Key Features
RamiBot connects AI reasoning with real cybersecurity tools through a structured operations pipeline.
AI & Reasoning
- 🧠 **Multi-provider LLM support**: OpenAI, Anthropic, OpenRouter, LM Studio, and Ollama
- 🧠 **Skill Pipeline**: structured methodology (Recon → Exploit → Defense → Reporting)
- 🔐 **Evidence-Locked Reporting**: prevents hallucinated CVEs, versions, or findings
Security Tool Integration
- 🧰 **Real security tool execution via MCP**: integrates pentesting tools inside controlled environments
- 🕵️ **Rami-Kali MCP server**: 45+ pentesting tools available to the LLM
Infrastructure
- 🐳 **Docker-integrated terminal**: run commands directly inside containerized environments
- 🛑 **Tool Approval Gate**: human approval before executing security tools
- 📄 **One-click PDF report export**: generate structured security reports instantly
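The Evidence-Locked Reporting idea above can be sketched in a few lines: compare the identifiers a draft report claims against what the tools actually printed. The function name, regex, and logic below are an illustrative approximation, not RamiBot's implementation.

```python
import re

# CVE identifiers follow the pattern CVE-YYYY-NNNN (4-7 digit sequence number)
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")

def unsupported_cves(report: str, tool_output: str) -> list[str]:
    """Return CVE IDs mentioned in a draft report that never appeared
    in the raw tool output, i.e. candidates for hallucination."""
    evidence = set(CVE_RE.findall(tool_output))
    return [cve for cve in CVE_RE.findall(report) if cve not in evidence]

draft = "Apache 2.4.49 is vulnerable to CVE-2021-41773 and CVE-2021-44228."
output = "nmap: 80/tcp open http Apache httpd 2.4.49 (CVE-2021-41773)"
print(unsupported_cves(draft, output))  # ['CVE-2021-44228']
```

A real evidence lock would also cover versions, severities, and service claims, but the principle is the same: the report may only repeat what the evidence contains.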
RamiBot v3.8.0
A local-first AI chat interface for security operations. Core capabilities:
- Multiple LLM providers with real-time streaming
- MCP tool integration and a dynamic security skill system
- Docker terminal access with zsh (syntax highlighting and autosuggestions)
- Tor transparent proxy management and proxychains4 proxy routing with ready-made Burp and Tor profiles
- Persistent findings database and one-click PDF report export
- Human-in-the-loop Tool Approval Gate that pauses execution before every MCP tool call
- Global Evidence-Locked Reporting system that prevents the model from fabricating versions, CVEs, severity ratings, or security properties not explicitly present in tool output
- Dedicated Burp Suite web assessment skill and a response language selector
- Hermes tool chaining that detects and executes `<tool_call>` XML emitted by Llama/Hermes fine-tuned models
- Service-Bound CVE Correlation that locks every CVE to its exact detected service via CPE data
- CVE Query Lock rule that prevents semantic drift when generating NVD lookup queries after service discovery
- OAuth token support for OpenAI (ChatGPT Plus/Pro subscription via Codex CLI) and Anthropic (reserved, pending re-enablement)
- One-command install and start scripts (install.sh / install.bat, start.sh / start.bat) that automate the full setup from a fresh system
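The Hermes tool-chaining step mentioned above amounts to scanning model output for `<tool_call>` blocks and decoding their JSON payload. The sketch below follows the common Hermes function-calling convention (a JSON object with `name` and `arguments` inside the tags); RamiBot's actual parser may differ.

```python
import json
import re

# Non-greedy match across newlines so nested JSON braces are handled
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list[dict]:
    """Return every tool call embedded in a model response."""
    calls = []
    for match in TOOL_CALL_RE.finditer(text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue  # skip malformed payloads instead of breaking the stream
    return calls

reply = 'Scanning. <tool_call>{"name": "nmap_scan", "arguments": {"target": "10.0.0.5"}}</tool_call>'
print(extract_tool_calls(reply)[0]["name"])  # nmap_scan
```

Each decoded call would then be routed through the Tool Approval Gate before any MCP execution happens.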
Demo
<table> <tr> <td align="center"> <a href="https://www.youtube.com/watch?v=AUpUkzdXBE0"> <img src="https://img.youtube.com/vi/AUpUkzdXBE0/maxresdefault.jpg" width="600" style="border-radius:10px;" alt="RamiBot in Action — AI-Assisted Pentesting Pipeline (Claude 4.5 + Rami-Kali)" /> </a> <br/> <b>AI-Assisted Pentesting Pipeline (Claude 4.5 + Rami-Kali)</b> </td> <td align="center"> <a href="https://www.youtube.com/watch?v=Ff8whSKAWQ4&t=39s"> <img src="https://img.youtube.com/vi/Ff8whSKAWQ4/maxresdefault.jpg" width="420" style="border-radius:10px;" alt="RamiBot AI Cybersecurity Demo | Port Scan → CVE Intelligence → Security Report" /> </a> <br/> <b>Port Scan → CVE Intelligence → Security Report (Local AI) Qwen 3.5 4B (Q8_0)</b> </td> </tr> </table>

Installation
Requirements
- Python 3.9+
- Node.js 18+
- npm
- Docker Desktop (required — for the rami-kali MCP server, Docker terminal, and Tor features)
Windows Installer (easiest)
Download RamiBot-Setup-v3.8.0.exe from the Releases page, run it, and follow the wizard.
Before running the installer, make sure Docker Desktop is installed and running. The installer checks for it and will abort if Docker is not found.
The wizard checks for Python 3.9+ and Node.js 18+. If either is missing it downloads and installs them via their official wizards, then installs all Python and npm dependencies automatically. After the wizard completes, launch RamiBot from the desktop shortcut and add your API key(s) in Settings.
First launch: on the very first start, RamiBot automatically builds the rami-kali Docker image in the background. This can take a few minutes depending on your connection. Subsequent launches are instant — the image is already built and the container starts in seconds.
🎥 Full Installation Demo (Windows)
<p align="center"> <a href="https://www.youtube.com/watch?v=69mGhEFiuXU"> <img src="https://img.youtube.com/vi/69mGhEFiuXU/maxresdefault.jpg" width="600" style="border-radius:10px;"> </a> </p>

One-command install (recommended)
```shell
git clone <repository-url>
cd ramibot

# Linux / macOS
bash install.sh

# Windows
install.bat
```
The script checks all prerequisites (Python, Node, Docker), installs missing ones automatically where possible, sets up the Python venv, installs npm dependencies, copies settings.example.json → settings.json, builds the rami-kali Docker image, and starts the container — all in one step. Running it again is safe; existing config is never overwritten.
After install, edit backend/settings.json and add your API key(s), then:
```shell
# Linux / macOS
bash start.sh

# Windows
start.bat
```
start.sh / start.bat launches backend + frontend in the background and opens http://localhost:5173 after 4 seconds. The rami-kali container is left running on shutdown (restart: unless-stopped).
Manual install (alternative)
```shell
git clone <repository-url>
cd ramibot
```
Backend:
```shell
cd backend
python -m venv .venv

# Windows
.venv\Scripts\activate

# macOS / Linux
source .venv/bin/activate

pip install -r requirements.txt
```
Frontend:
```shell
cd frontend
npm install
```
Run
One command (after install):
```shell
bash start.sh   # Linux / macOS
start.bat       # Windows
```
Two terminals (manual):
```shell
# Terminal 1
cd backend
python -m uvicorn main:app --reload --port 8000

# Terminal 2
cd frontend
npm run dev
```
Makefile (macOS/Linux):
```shell
make install
make dev
```
Open http://localhost:5173.
Overview
RamiBot is a self-hosted chat application built for security engineers who need a controllable, extensible interface between LLMs and operational tooling.
It does not depend on any cloud chat product. Conversations are stored locally in SQLite. Provider API keys are configured at runtime. All tool execution happens inside Docker containers.
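Local-only conversation storage is plain SQLite. The schema below is purely illustrative (RamiBot's actual tables are not documented here); it only shows what "stored locally, no cloud dependency" means in practice.

```python
import sqlite3

# In-memory DB for the sketch; RamiBot persists to a local file instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, role TEXT, content TEXT)")
conn.execute("INSERT INTO messages (role, content) VALUES (?, ?)",
             ("user", "scan 10.0.0.5"))
conn.execute("INSERT INTO messages (role, content) VALUES (?, ?)",
             ("assistant", "Starting recon..."))
conn.commit()

history = conn.execute("SELECT role, content FROM messages ORDER BY id").fetchall()
print(history)
```

Because the store is a single local file, conversation history never leaves the host unless you export it yourself.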
The core differentiator is the skill pipeline: a prompt engineering system that detects the operational context from user input (reconnaissance, exploitation, defense, analysis, reporting), selects the appropriate skill, and injects structured methodology instructions into the system prompt before each LLM call. Team mode (red or blue) controls which skills are available and how the LLM frames its responses.
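The skill-pipeline flow just described can be sketched as a keyword-driven dispatch that prepends methodology text to the system prompt. The skill names, keyword tables, and prompt wording below are hypothetical stand-ins, not RamiBot's actual skill definitions.

```python
# skill name -> (trigger keywords, methodology injected into the system prompt)
SKILLS = {
    "recon": (("scan", "enumerate", "fingerprint"),
              "Methodology: define scope, scan, enumerate services, record evidence."),
    "exploit": (("exploit", "payload", "cve"),
                "Methodology: verify the finding, stage safely, wait for approval."),
    "reporting": (("report", "summary", "findings"),
                  "Methodology: include only evidence-backed findings."),
}

def compose_system_prompt(base: str, user_input: str, team: str = "red") -> str:
    """Detect the operational context and inject the matching methodology."""
    lowered = user_input.lower()
    for skill, (keywords, methodology) in SKILLS.items():
        if any(k in lowered for k in keywords):
            return f"{base}\n[team: {team}] [skill: {skill}]\n{methodology}"
    return base  # no skill matched: plain system prompt

print(compose_system_prompt("You are RamiBot.", "Run an nmap scan of 10.0.0.5"))
```

A production pipeline would likely use richer intent detection than substring matching, but the injection point (before each LLM call, gated by team mode) is the part that matters.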
Who it is for:
- Security engineers running structured red team or blue team workflows
- Analysts who need LLM-assisted reasoning alongside real tool execution via MCP
- Researchers integrating local models (Ollama, LM Studio) into security workflows
- Teams that need full local data control with no cloud dependency for conversation history
Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                       FRONTEND (React 19)                       │
│      Sidebar │ ChatPanel │ SettingsModal │ DockerTerminal       │
│                       Zustand State Store                       │
│             SSE consumer / fetch client (port 5173)             │
└───────────────────────────┬─────────────────────────────────────┘
                            │ HTTP / SSE
┌───────────────────────────▼─────────────────────────────────────┐
│                        BACKEND (FastAPI)                        │
│                                                                 │
│   /api/chat/stream ──► SkillPipeline ──► LLM Adapter            │
│                              │               │                  │
│                       PromptComposer     httpx (SSE)            │
│                              │               │                  │
│                        System Prompt     Provider API           │
│                                                                 │
│   Tool call detected ──► MCPClient ──► rami-kali MCP server     │
│                     (auto-configured)  (docker exec stdio)      │
│                                 ──► MCP Server (stdio/HTTP)     │
│   Tool result ──────────────────────► LLM follow-up             │
│                                                                 │
│   /api/terminal/* ──► TerminalSession ──► docker exec           │
│   /api/docker/tor ──► tor_start/stop ──► iptables (container)   │
└─────────────────────────────────────────────────────────────────┘
```
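The frontend's SSE consumer shown in the diagram receives the `/api/chat/stream` response as `data:` lines. The helper below sketches that parsing step; the `[DONE]` sentinel and payload shape are assumptions borrowed from common SSE chat APIs, not RamiBot's documented wire format.

```python
def parse_sse_lines(lines):
    """Yield the data payload of each SSE event line, skipping
    blank keep-alive lines and a terminal [DONE] sentinel."""
    for line in lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield payload

stream = ["data: Hello", "data:  world", "", "data: [DONE]"]
print(list(parse_sse_lines(stream)))  # ['Hello', 'world']
```

In the real app the payloads are consumed incrementally to render streaming tokens as they arrive.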