# OnionClaw

OnionClaw™ — OpenClaw skill for full Tor dark web access: search .onion sites, fetch hidden services, rotate identity, run the OSINT pipeline, crawl.

## Install / Use

`/learn @JacobJandon/OnionClaw`
# OnionClaw 🧅

by JacobJandon

<p align="center"> <img src="OnionClaw-logo.png" alt="OnionClaw logo" width="200"/> </p>

OpenClaw skill + standalone tool — full Tor / dark web access for AI agents.
OnionClaw gives AI agents full access to the Tor network and .onion hidden services. It runs as an OpenClaw skill (drop-in, zero config beyond a .env file) and also works standalone from any terminal.
Based on the SICRY engine — 18 dark web search engines, Robin OSINT pipeline, four LLM analysis modes.
```bash
# As an OpenClaw skill:
cp -r OnionClaw ~/.openclaw/skills/onionclaw
# → agent now has 7 dark web commands available in every session

# Standalone:
python3 check_tor.py                                  # verify Tor
python3 search.py --query "ransomware healthcare"
python3 pipeline.py --query "acme.com data leak" --mode corporate
```
## ⚠️ The Rabbit Hole
Autonomous agents paired with the Tor network will be one of the most dangerous automation stacks on the internet within the next five years. OnionClaw is living proof that the rabbit hole goes deeper than most people think.
This tool is built for legitimate OSINT, threat intelligence, and security research. But the same primitives — anonymous routing, bulk scraping, AI-driven synthesis, zero-attribution browsing, automated identity rotation — are precisely what make this combination genuinely dangerous in the wrong hands.
This is not a warning tucked in fine print. It is the whole point of writing it down openly.
### What the stack enables — the full map
| Use case | What it looks like |
|---|---|
| Dark-web crawling | Automated, headless spidering of .onion services at scale — forums, paste sites, markets, leak boards — with full identity rotation between every request. No human ever touches a keyboard. |
| Threat intelligence | Continuous monitoring of ransomware group blogs, initial access broker ads, CVE exploit drops, and actor chatter long before it surfaces on clearnet feeds. |
| Marketplace monitoring | Price tracking, stock alerts, vendor reputation scraping, and availability checks across darknet markets — the same logic a researcher uses to track fentanyl price trends is the same logic a supplier uses to undercut competitors. |
| Credential surveillance | Watching paste boards, breach dumps, and forum leaks for specific email domains, API keys, SSH keys, or internal hostnames the moment they appear — at a scale no human analyst can match. |
| Deanonymisation research | Cross-correlating .onion service metadata with clearnet traces, timing attacks, correlation of writing style and PGP keys — used both by law enforcement hunting criminals and by threat actors hunting journalists and dissidents. |
| Criminal automation | Autonomous agents placing orders, posting ads, messaging vendors, managing mule accounts, draining wallets — an entire criminal operation running without a human ever in the loop. |
| Disinformation infrastructure | Coordinated persona networks on hidden boards, fabricated document drops timed to bleed into legitimate OSINT pipelines, synthetic intelligence that reads real but originates from nowhere. |
| Zero-day brokerage | Automated monitoring of exploit vendor channels, private CVE auction boards, and vulnerability markets — buy-side and sell-side intelligence gathered faster than any human analyst. |
### The ugly side
The 2026 internet is already at the edge of this. Within five years, AI agents that can:
- Browse anonymously through rotating Tor circuits with no persistent identity
- Understand context well enough to navigate dark web UIs, CAPTCHA logic, and forum culture without hardcoded selectors
- Act autonomously — search, buy, post, exfiltrate, rotate — in closed loops with no human confirmation step
- Self-orchestrate across dozens of simultaneous Tor identities on parallel threads
…represent a qualitative shift from human criminals using tools to autonomous criminal infrastructure operating at machine speed with no human in the loop. The bottleneck has always been human attention. Remove it and the scaling properties of dark web operations change completely.
OnionClaw demonstrates all four of those primitives working together today. The full pipeline.py step — query refinement → multi-engine search → result filtering → batch scrape → LLM synthesis → identity rotation — is a complete autonomous dark web intelligence loop. Remove the OSINT framing and it is equally a complete autonomous dark web operation loop. The code is the same either way.
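The loop described in that paragraph decomposes into pluggable stages. A minimal sketch, with every stage name hypothetical rather than taken from the repo's actual code:

```python
from typing import Callable, List

def run_pipeline(
    query: str,
    refine: Callable[[str], str],         # LLM query refinement
    search: Callable[[str], List[str]],   # fan-out across the search engines
    keep: Callable[[str], bool],          # result-filtering predicate
    scrape: Callable[[str], str],         # fetch one URL through Tor
    analyse: Callable[[List[str]], str],  # LLM synthesis over scraped pages
) -> str:
    """Illustrative refine -> search -> filter -> scrape -> analyse loop."""
    refined = refine(query)
    urls = [u for u in search(refined) if keep(u)]
    pages = [scrape(u) for u in urls]
    return analyse(pages)
```

Identity rotation would slot in between scrape calls. Swapping any stage changes what the loop is for without changing its shape, which is exactly the dual-use point this section makes.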
### Why this is written explicitly
Security tools that pretend the dual-use problem does not exist are more dangerous than ones that name it directly. If you are building on top of OnionClaw:
- Know what you are building. The pipeline does not know whether the query is `"acme.com credential leak"` for a pentest or `"rival vendor SSH keys"` for espionage.
- Know your jurisdiction. Automated access to dark web content and `.onion` services may be illegal in your country regardless of intent or findings.
- Tor is not legal protection. It is operational security. The two are different things with very different limits.
- AI + Tor + autonomy is not a theoretical threat. It is a present capability. This repo is one of many signals that the tooling is ready.
OnionClaw is published for defensive research, red-team engagements, and threat intelligence work. The code does not know the difference between those uses and their inverse. You do. Build accordingly.
## Contents
- ⚠️ The Rabbit Hole
- What OnionClaw does
- Requirements
- Install as OpenClaw skill
- Standalone install
- Configuration
- All seven commands
- Investigation flows
- Analysis modes
- Architecture
- Troubleshooting
- Credits
## What OnionClaw does
Seven commands expose the complete Tor OSINT toolkit:
| Command | What it does |
|---|---|
| `check_tor.py` | Verify Tor is active, show current exit IP |
| `renew.py` | Rotate Tor circuit — new exit node, new identity |
| `check_engines.py` | Ping all 18 dark web search engines, show latency |
| `search.py` | Search up to 18 engines simultaneously, deduplicated results |
| `fetch.py` | Fetch any .onion or clearnet URL through Tor |
| `ask.py` | LLM OSINT analysis of scraped content (4 modes) |
| `pipeline.py` | Full Robin pipeline: refine → search → filter → scrape → analyse |
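All seven commands ultimately route traffic through the local SOCKS proxy. A hedged sketch of that core primitive, assuming Tor listens on `127.0.0.1:9050` (illustrative, not `fetch.py`'s actual code):

```python
import requests  # requests[socks] provides the SOCKS support

def tor_proxies(host: str = "127.0.0.1", port: int = 9050) -> dict:
    # socks5h:// (note the "h") makes the proxy resolve hostnames itself,
    # which is mandatory for .onion addresses: they have no DNS records.
    proxy = f"socks5h://{host}:{port}"
    return {"http": proxy, "https": proxy}

def fetch_via_tor(url: str, timeout: int = 60) -> str:
    # Fetch any .onion or clearnet URL through the Tor SOCKS proxy.
    resp = requests.get(url, proxies=tor_proxies(), timeout=timeout)
    resp.raise_for_status()
    return resp.text
```

The `socks5h` scheme rather than plain `socks5` is the detail that most often breaks `.onion` fetches: with `socks5`, hostname resolution happens locally, fails for `.onion` names, and leaks the lookup outside Tor.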
## Requirements

- Python 3.10+
- Tor running locally (SOCKS proxy on `127.0.0.1:9050`)
- pip packages: `requests[socks]`, `beautifulsoup4`, `python-dotenv`, `stem`
- LLM key (optional — only needed for the `ask.py` and `pipeline.py` analysis step)
### Install Tor

Linux (Debian/Ubuntu):

```bash
apt install tor && tor &
```

macOS:

```bash
brew install tor && tor &
```
With control port (needed for `renew.py`):

```bash
cat > /tmp/onionclaw_tor.conf << 'EOF'
SocksPort 9050
ControlPort 9051
CookieAuthentication 1
DataDirectory /tmp/tor_data
EOF

tor -f /tmp/onionclaw_tor.conf &
```

Then set `TOR_DATA_DIR=/tmp/tor_data` in `.env`.
### Install Python packages

```bash
pip install "requests[socks]" beautifulsoup4 python-dotenv stem
```
## Install as OpenClaw skill

1. Clone or copy this repo into your OpenClaw skills directory:

   ```bash
   # Option A — clone directly
   git clone https://github.com/JacobJandon/OnionClaw ~/.openclaw/skills/onionclaw

   # Option B — copy local folder
   cp -r OnionClaw ~/.openclaw/skills/onionclaw
   ```

2. Configure `.env` in the skill folder:

   ```bash
   cp ~/.openclaw/skills/onionclaw/.env.example ~/.openclaw/skills/onionclaw/.env
   nano ~/.openclaw/skills/onionclaw/.env   # add LLM key if desired
   ```

3. Start a new OpenClaw session — the skill loads automatically on startup. OpenClaw includes `onionclaw` in the agent context whenever the user asks about dark web topics.
Verify OpenClaw can see the skill:

```bash
openclaw skills list
# → onionclaw 🧅 Search the Tor dark web...
```
OpenClaw trigger phrases:
- "search the dark web for …"
- "investigate this .onion site …"
- "check if my data appeared on the dark web"
- "find ransomware leaks related to …"
- "fetch this .onion URL …"
- "run a Tor OSINT investigation on …"
After install, start a new session — existing sessions will not pick up the new skill.
## Standalone install

No OpenClaw required. Every script runs directly from a terminal:

```bash
git clone https://github.com/JacobJandon/OnionClaw
cd OnionClaw
pip install "requests[socks]" beautifulsoup4 python-dotenv stem
cp .env.example .env
# Edit .env — add LLM key if desired (optional for most commands)
```
## Configuration

Copy `.env.example` to `.env` and fill in what you need:

```ini
# ── Tor ────────────────────────────────────────────────────────────────
TOR_SOCKS_HOST=127.0.0.1
TOR_SOCKS_PORT=9050
TOR_CONTROL_HOST=127.0.0.1
TOR_CONTROL_PORT=9051
# TOR_CONTROL_PASSWORD=your_password   # only if HashedControlPassword in torrc
# TOR_DATA_DIR=/tmp/tor_data           # DataDirectory path for cookie auth

# ── LLM (needed only for ask.py and pipeline.py analysis step) ──────────
LLM_PROVIDER=openai                    # openai | anthropic | gemini | ollama | llamacpp
OPENA
```