AIRecon
AIRecon is an autonomous penetration testing agent that combines a self-hosted Ollama LLM with a Kali Linux Docker sandbox, native Caido proxy integration, a structured RECON → ANALYSIS → EXPLOIT → REPORT pipeline, and a real-time Textual TUI — designed to automate security assessments, penetration testing, and bug bounty reconnaissance completely offline, with no API keys or cloud dependency.

Why AIRecon?
Commercial API-based models (OpenAI GPT-4, Claude, Gemini) become prohibitively expensive for recursive, autonomous recon workflows that can require thousands of LLM calls per session.
AIRecon is built 100% for local, private operation.
| Feature | AIRecon | Cloud-based agents |
|---------|---------|--------------------|
| API keys required | No | Yes |
| Target data sent to cloud | No | Yes |
| Works offline | Yes | No |
| Caido integration | Native | None |
| Session resume | Yes | Varies |
- Privacy First — Target intelligence, tool output, and reports never leave your machine.
- Caido Native — 5 built-in tools: list, replay, automate (§FUZZ§), findings, scope.
- Full Stack — Kali sandbox + browser automation + custom fuzzer + Schemathesis API fuzzing + Semgrep SAST.
- Skills Knowledge Base — 57 built-in skill files, 289 keyword → skill auto-mappings. Extended by airecon-skills — a community skill library with 57 additional CLI-based playbooks for CTF, bug bounty, and pentesting.
Pipeline
RECON → ANALYSIS → EXPLOIT → REPORT
Each phase has specific objectives, recommended tools, and automatic transition criteria. Phase enforcement is soft — the agent is guided but never blocked. Checkpoints run every 5 (phase eval), 10 (self-eval), and 15 (context compression) iterations.
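The checkpoint cadence above can be sketched as a simple modulo schedule. This is an illustrative reading of the stated intervals, not AIRecon's actual implementation; the function name and return format are assumptions:

```python
def due_checkpoints(iteration: int) -> list[str]:
    """Return which checkpoints fire at a given iteration.

    Cadences from the pipeline description: phase evaluation every 5
    iterations, self-evaluation every 10, context compression every 15.
    """
    cadence = {"phase_eval": 5, "self_eval": 10, "context_compression": 15}
    return [name for name, every in cadence.items() if iteration % every == 0]
```

At iteration 30 all three coincide, so a run periodically gets a phase check, a self-check, and a context compression in the same step.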
Model Requirements
AIRecon requires a model with extended thinking (<think> blocks) and reliable tool-calling capabilities. Capabilities are auto-detected via ollama show metadata.
⚠️ Tool calling support is REQUIRED. The model must support native function/tool calling. Models without this capability will be unable to execute any tools (http_observe, execute, browser actions, etc.), making AIRecon completely non-functional.
Recommended minimum: 8B-9B parameters. Models below 8B are technically usable but strongly discouraged — they frequently hallucinate tool output, invent CVEs, skip scope rules, and produce unreliable tool calls.
| Model | Pull | VRAM | Notes |
|-------|------|------|-------|
| Qwen3.5 122B | ollama pull qwen3.5:122b | 48+ GB | Best quality, most reliable |
| Qwen3.5 35B | ollama pull qwen3.5:35b | 20 GB | Recommended for most users |
| Qwen3.5 35B-A3B | ollama pull qwen3.5:35b-a3b | 16 GB | MoE — lower VRAM |
| Qwen3.5 9B | ollama pull qwen3.5:9b | 6 GB | Minimum viable — expect frequent errors |
Model size guidance:
- ≥32B: Reliable for full recon pipelines, good tool calling accuracy
- 8B-14B: Usable for simple tasks, expect 20-40% tool call errors and hallucinations
- <8B: Technically works but produces unreliable results — not recommended for serious testing
Known issues: DeepSeek R1 produces incomplete function calls. Models < 8B lack reliable tool calling support.
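Capability auto-detection could look something like the sketch below, which reads the capability list from an Ollama show/`/api/show` response. Recent Ollama versions advertise a top-level "capabilities" list (e.g. ["completion", "tools", "thinking"]); treat the exact field layout as an assumption, and the function name as illustrative:

```python
def detect_capabilities(show_response: dict) -> dict:
    """Derive tool-calling and thinking support from an Ollama
    /api/show-style response; fall back to "unsupported" if the
    capabilities field is absent."""
    caps = set(show_response.get("capabilities", []))
    return {
        "supports_native_tools": "tools" in caps,
        "supports_thinking": "thinking" in caps,
    }

# Example payload shaped like an /api/show response:
sample = {"capabilities": ["completion", "tools", "thinking"]}
```

A model whose response lacks "tools" in this list would trip the warning above: every tool call would fail.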
Installation
Prerequisites: Python 3.12+, Docker 20.10+, Ollama (running), git, curl
One-line install (recommended)
```
curl -fsSL https://raw.githubusercontent.com/pikpikcu/airecon/refs/heads/main/install.sh | bash
```
The script auto-detects remote vs local mode, installs Poetry if missing (via official installer — no system package conflicts), builds the wheel, and installs to ~/.local/bin.
Manual install (from source)
```
git clone https://github.com/pikpikcu/airecon.git
cd airecon
./install.sh

# Add to ~/.bashrc or ~/.zshrc if needed
export PATH="$HOME/.local/bin:$PATH"

airecon --version
```
Configuration
Config file: ~/.airecon/config.yaml (auto-generated on first run).
```yaml
# ======================================
# Ollama Connection
# ======================================
# Ollama API endpoint. For remote servers use http://IP:11434
ollama_url: "http://127.0.0.1:11434"

# Model to use. Recommended: qwen3.5:122b for best reasoning
ollama_model: "qwen3.5:122b"

# Total request timeout (seconds). 300s = 5 min. Increase for slow remote servers.
ollama_timeout: 300.0

# Per-chunk stream timeout (seconds). 180s for 122B model prefill over network.
ollama_chunk_timeout: 180.0

# ======================================
# Ollama Model Settings
# ======================================
# Context window size. 131072 = 128K (full). Reduce to 65536 if VRAM < 24GB.
ollama_num_ctx: 131072

# Context for CTF/summary mode. 65536 = 64K (half VRAM usage).
ollama_num_ctx_small: 65536

# LLM temperature. 0.15 = deterministic. Range: 0.0–0.3 for pentesting.
ollama_temperature: 0.15

# Max tokens to generate. 32768 for detailed tool responses.
ollama_num_predict: 32768

# Enable extended thinking mode (for Qwen3.5+).
ollama_enable_thinking: true

# Auto-detected: model supports <think> blocks.
ollama_supports_thinking: true

# Auto-detected: model supports native tool calling.
ollama_supports_native_tools: true

# Max concurrent Ollama requests. Keep 1 for 122B models.
ollama_max_concurrent_requests: 1

# Protect first N tokens from KV eviction. 8192 = protect system prompt (~8K tokens).
ollama_num_keep: 8192

# Prevent repetition loops. 1.05 = mild. Range: 1.0–1.2.
ollama_repeat_penalty: 1.05
```
| Key | Default | Notes |
|-----|---------|-------|
| ollama_temperature | 0.15 | Keep 0.1–0.2. Higher values cause hallucination. |
| ollama_num_ctx | 131072 | Reduce to 32768 if VRAM is limited. |
| ollama_keep_alive | "60m" | How long to keep model in VRAM. |
| deep_recon_autostart | true | Bare domain inputs auto-expand to full recon. |
| allow_destructive_testing | false | Unlocks aggressive modes (SQLi confirm, RCE chains). |
| command_timeout | 900.0 | Max seconds per shell command in Docker. |
| vuln_similarity_threshold | 0.7 | Jaccard dedup threshold for vulnerabilities. |
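The vuln_similarity_threshold setting deduplicates findings by Jaccard similarity. A minimal sketch of how such a check could work, assuming token-level comparison (the real implementation may tokenize or weight differently):

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over lowercase whitespace-delimited token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def is_duplicate(new_title: str, seen: list[str], threshold: float = 0.7) -> bool:
    """Treat a finding as a duplicate if it is at least `threshold`
    similar to any previously recorded finding (mirrors the default
    vuln_similarity_threshold of 0.7)."""
    return any(jaccard(new_title, old) >= threshold for old in seen)
```

With the default 0.7, "reflected xss in search param" and "reflected xss in search param q" (similarity 5/6 ≈ 0.83) collapse into one finding, while unrelated titles pass through.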
Remote Ollama:
```yaml
ollama_url: "http://192.168.1.100:11434"
ollama_model: "qwen3:32b"
```
Usage
```
airecon start                          # start TUI
airecon start --session <session_id>   # resume session
```
Example prompts:
```
# Full pipeline
full recon on example.com
pentest https://api.example.com

# Specific tasks
find subdomains of example.com
scan ports on 10.0.0.1
check for XSS on https://example.com/search
test SQL injection on https://example.com/api/login parameter: username
run schemathesis on https://example.com/openapi.json

# Authenticated testing
login to https://example.com/login with admin@example.com / password123 then test for IDOR
test https://app.example.com with TOTP: JBSWY3DPEHPK3PXP

# Multi-agent
spawn an XSS specialist on https://example.com/search
run parallel recon on: example.com, sub.example.com, api.example.com

# Caido
replay request #1234 with a modified Authorization header
use Caido to fuzz the username parameter in request #45 with §FUZZ§ markers
```
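Caido's automate tool marks the injection point with §FUZZ§. A minimal sketch of how such a marker expands into concrete requests; the helper name and the URL-encoding choice are illustrative assumptions, not Caido's or AIRecon's API:

```python
from urllib.parse import quote

def expand_fuzz(template: str, payloads: list[str]) -> list[str]:
    """Replace the §FUZZ§ marker with each payload (URL-encoded),
    producing one concrete parameter value per payload."""
    return [template.replace("§FUZZ§", quote(p, safe="")) for p in payloads]
```

For example, a template of `username=§FUZZ§` with a SQL injection payload list yields one encoded request body per payload.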
Workspace
```
workspace/<target>/
├── output/           # Raw tool outputs (nmap, httpx, nuclei, subfinder, ...)
├── tools/            # AI-generated exploit scripts (.py, .sh)
└── vulnerabilities/  # Verified vulnerability reports (.md)
```
Sessions persist at ~/.airecon/sessions/<session_id>.json — subdomains, ports, technologies, URLs, vulnerabilities (Jaccard dedup), auth tokens, and completed phases.
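Session resume could be modeled as loading that JSON file back into state. The field names below follow the categories listed above (subdomains, ports, vulnerabilities, completed phases); the actual schema may differ:

```python
import json
from pathlib import Path

SESSIONS_DIR = Path.home() / ".airecon" / "sessions"

def load_session(session_id: str, base: Path = SESSIONS_DIR) -> dict:
    """Load persisted session state from <base>/<session_id>.json,
    returning an empty-state dict when no session file exists."""
    path = base / f"{session_id}.json"
    if not path.exists():
        return {"subdomains": [], "ports": [], "vulnerabilities": [],
                "completed_phases": []}
    return json.loads(path.read_text())
```

This is why `airecon start --session <session_id>` can pick up mid-pipeline: completed phases and collected assets are reloaded rather than re-enumerated.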
Troubleshooting
Ollama OOM / HTML error page — Most common on long sessions or large models near VRAM limits.
```
sudo systemctl restart ollama
```
If it recurs, lower the context sizes in ~/.airecon/config.yaml:
```yaml
ollama_num_ctx: 32768
ollama_num_ctx_small: 16384
ollama_num_predict: 8192
```
Agent loops/stalls — Usually a reasoning failure. Try a larger model, or reduce ollama_temperature to < 0.2.
Docker sandbox not starting:
```
docker build -t airecon-sandbox airecon/containers/kali/
```
Caido connection refused — Caido must be running before AIRecon. Default: 127.0.0.1:48080.
PATH not found after install:
```
export PATH="$HOME/.local/bin:$PATH" && source ~/.zshrc
```
License
MIT License. See LICENSE for details.
Disclaimer
AIRecon is built strictly for educational purposes, ethical hacking, and authorized security assessments. Any actions related to the material in this tool are solely your responsibility. Do not use this tool on systems or networks you do not own or have explicit permission to test.