Crw
Fast, lightweight Firecrawl alternative in Rust. Web scraper, crawler & search API with MCP server for AI agents. Drop-in Firecrawl-compatible API (/v1/scrape, /v1/crawl, /v1/search). 2.3x faster than Tavily, 1.5x faster than Firecrawl in 1K-URL benchmarks. 6 MB RAM, single binary. Self-host or use managed cloud.
Don't want to self-host? fastcrw.com — managed cloud with global proxy network, auto-scaling, web search, and dashboard. Same API, zero infra. Get 500 free credits →
Scrape any URL in one command
crw example.com
# Example Domain
This domain is for use in illustrative examples in documents.
You may use this domain in literature without prior coordination or asking for permission.
[More information...](https://www.iana.org/domains/example)
Markdown output, no server, no config. Install →
Install
CLI (crw) — scrape URLs from your terminal
# Homebrew:
brew install us/crw/crw
# APT (Debian/Ubuntu):
curl -fsSL https://apt.fastcrw.com/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/crw.gpg
echo "deb [signed-by=/usr/share/keyrings/crw.gpg] https://apt.fastcrw.com stable main" | sudo tee /etc/apt/sources.list.d/crw.list
sudo apt update && sudo apt install crw
# One-line install (auto-detects OS & arch):
curl -fsSL https://raw.githubusercontent.com/us/crw/main/install.sh | CRW_BINARY=crw sh
# Cargo:
cargo install crw-cli
MCP Server (crw-mcp) — give AI agents web scraping tools
# Homebrew:
brew install us/crw/crw-mcp
# APT (Debian/Ubuntu — add repo once, see CLI section above):
sudo apt install crw-mcp
# One-line install:
curl -fsSL https://raw.githubusercontent.com/us/crw/main/install.sh | sh
# npm (zero install):
npx crw-mcp
# Python:
pip install crw
# Cargo:
cargo install crw-mcp
# Docker:
docker run -i ghcr.io/us/crw crw-mcp
Listed on the MCP Registry
API Server (crw-server) — Firecrawl-compatible REST API
# Homebrew:
brew install us/crw/crw-server
# APT (Debian/Ubuntu — add repo once, see CLI section above):
sudo apt install crw-server
# One-line install:
curl -fsSL https://raw.githubusercontent.com/us/crw/main/install.sh | CRW_BINARY=crw-server sh
# Cargo:
cargo install crw-server
# Docker:
docker run -p 3000:3000 ghcr.io/us/crw
Choose Your Mode
| | CLI (crw) | MCP (crw-mcp) | Server (crw-server) | Docker |
|---|---|---|---|---|
| Use case | Terminal scraping | AI agent tools | REST API backend | Containerized deploy |
| Server needed? | No | No (embedded) | Yes (:3000) | Yes (:3000) |
| JS rendering | --js (auto-detect)^1 | Auto-detect + auto-download^1 | crw-server setup^2 | Included (sidecar) |
| Single URL scrape | Yes | Yes | Yes | Yes |
| Web search | — | Cloud only^3 | — | — |
| Async crawl | — | Yes | Yes | Yes |
| URL mapping | — | Yes | Yes | Yes |
| REST API | — | — | Yes (Firecrawl-compat) | Yes (Firecrawl-compat) |
| MCP protocol | — | Yes (stdio + HTTP) | HTTP only | HTTP only |
| Output | stdout | MCP protocol | JSON responses | JSON responses |
| LLM extraction | — | Yes | Yes | Yes |
^1 CLI + MCP: Same auto-detect chain — LightPanda in PATH → `~/.crw/lightpanda` (auto-downloads if missing) → Chrome/Chromium on system → LightPanda Docker container. Falls back to HTTP-only if no browser is found. CLI requires the `--js` flag; MCP activates automatically. Both respect the `CRW_CDP_URL` env var for manual override.
^2 Server: `crw-server setup` downloads LightPanda and creates `config.local.toml`. Start LightPanda separately before running the server. With Docker Compose, LightPanda runs as a sidecar automatically.
^3 Web search: `crw_search` is available only when connected to fastcrw.com cloud (`CRW_API_URL` set). In embedded/self-hosted mode, use `crw_map` for site discovery instead.
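The auto-detect chain in footnote 1 can be sketched as a simple priority resolver. This is an illustrative Python sketch, not CRW's actual internals — the function name and injectable checks are assumptions; only the ordering and the `CRW_CDP_URL` / `~/.crw/lightpanda` details come from the docs above:

```python
import os
import shutil

def resolve_browser(env=os.environ, which=shutil.which,
                    exists=os.path.exists, docker_available=lambda: False):
    """Pick a JS-rendering backend in the priority order described above.

    The checks are injectable so the resolution order is easy to test;
    the real CRW logic may differ.
    """
    # Manual override always wins.
    if env.get("CRW_CDP_URL"):
        return ("cdp", env["CRW_CDP_URL"])
    # 1. LightPanda already on PATH.
    if which("lightpanda"):
        return ("lightpanda", which("lightpanda"))
    # 2. Managed copy under ~/.crw (auto-downloaded if missing).
    managed = os.path.expanduser("~/.crw/lightpanda")
    if exists(managed):
        return ("lightpanda", managed)
    # 3. A system Chrome/Chromium install.
    for browser in ("chromium", "chromium-browser", "google-chrome", "chrome"):
        if which(browser):
            return ("chrome", which(browser))
    # 4. LightPanda in a Docker container.
    if docker_available():
        return ("lightpanda-docker", None)
    # Nothing found: fall back to HTTP-only scraping.
    return ("http-only", None)
```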
Quick Start
CLI:
crw example.com # markdown to stdout
crw example.com --format html # HTML output
crw example.com --format links # extract all links
crw example.com --js # with JS rendering (auto-detects browser)
crw example.com --css 'article' --raw # extract specific elements
MCP (AI agents — recommended):
# Local (embedded — no server needed):
claude mcp add crw -- npx crw-mcp
# Cloud (fastcrw.com — includes web search):
claude mcp add -e CRW_API_URL=https://fastcrw.com/api -e CRW_API_KEY=your-key crw -- npx crw-mcp
Local mode gives you the `crw_scrape`, `crw_crawl`, and `crw_map` tools. Cloud mode adds `crw_search` for web search powered by fastcrw.com. For Cursor, Windsurf, Cline, and other MCP clients, see MCP Server.
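For clients that read a JSON MCP config instead of a CLI command, the equivalent entry typically looks like the following (the config file location varies by client; the `env` values mirror the Claude cloud example above and are only needed for cloud mode):

```json
{
  "mcpServers": {
    "crw": {
      "command": "npx",
      "args": ["crw-mcp"],
      "env": {
        "CRW_API_URL": "https://fastcrw.com/api",
        "CRW_API_KEY": "your-key"
      }
    }
  }
}
```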
Self-hosted server:
crw-server # start API server on :3000
crw-server setup # optional: set up JS rendering (downloads LightPanda)
Docker Compose (with JS rendering):
docker compose up
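A minimal Compose file for this setup might look like the sketch below. The service layout follows the sidecar description above, but the image name for LightPanda and the port are assumptions — the repo's own `docker-compose.yml` is authoritative:

```yaml
services:
  crw-server:
    image: ghcr.io/us/crw
    ports:
      - "3000:3000"
    environment:
      # Point the server at the sidecar's CDP endpoint (CRW_CDP_URL
      # is the override env var documented in the mode table above).
      - CRW_CDP_URL=ws://lightpanda:9222
    depends_on:
      - lightpanda
  lightpanda:
    image: lightpanda/browser   # assumed image name
```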
Scrape a page via API:
curl -X POST http://localhost:3000/v1/scrape \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com"}'
{
"success": true,
"data": {
"markdown": "# Example Domain\nThis domain is for use in ...",
"metadata": {
"title": "Example Domain",
"sourceURL": "https://example.com",
"statusCode": 200,
"elapsedMs": 32
}
}
}
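From a script, the response shape shown above is easy to consume. A small Python sketch, using the sample payload from this README (swap `raw` for the body of a real POST to `/v1/scrape`):

```python
import json

# Sample /v1/scrape response, matching the shape shown above.
raw = """
{
  "success": true,
  "data": {
    "markdown": "# Example Domain\\nThis domain is for use in ...",
    "metadata": {
      "title": "Example Domain",
      "sourceURL": "https://example.com",
      "statusCode": 200,
      "elapsedMs": 32
    }
  }
}
"""

resp = json.loads(raw)
assert resp["success"]
meta = resp["data"]["metadata"]
markdown = resp["data"]["markdown"]
print(meta["title"], meta["statusCode"])  # Example Domain 200
```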
Cloud (no setup):
curl -X POST https://fastcrw.com/api/v1/scrape \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com"}'
Get your API key at fastcrw.com — 500 free credits included.
Why CRW?
CRW gives you Firecrawl's API with a fraction of the resource usage. No runtime dependencies, no Redis, no Node.js — just a single binary you can deploy anywhere.
| Metric | CRW (self-hosted) | fastcrw.com (cloud) | Firecrawl | Tavily | Crawl4AI |
|---|---|---|---|---|---|
| Coverage (1K URLs) | 92.0% | 92.0% | 77.2% | — | — |
| Avg Scrape Latency | 833ms | 833ms | 4,600ms | — | — |
| Avg Search Latency | 880ms | 880ms | 954ms | 2,000ms | — |
| Search Win Rate | 73/100 | 73/100 | 25/100 | 2/100 | — |
| Idle RAM | 6.6 MB | 0 (managed) | ~500 MB+ | — (cloud) | — |
| Cold start | 85 ms | 0 (always-on) | 30–60 s | — | — |
| Self-hosting | Single binary | — | Multi-container | No | Python + Playwright |
| Cost / 1K scrapes | $0 (self-hosted) | From $13/mo | $0.83–5.33 | — | $0 |
| License | AGPL-3.0 | Managed | AGPL-3.0 | Proprietary | Apache-2.0 |
<details>
<summary><b>Full benchmark details</b></summary>

CRW vs Firecrawl — tested on the Firecrawl scrape-content-dataset-v1 (1,000 real-world URLs, JS rendering enabled):
- CRW covers 92% of URLs vs Firecrawl's 77.2% — 15 percentage points higher
- CRW is 5.5x faster on average (833ms vs 4,600ms)
- CRW uses ~75x less RAM at idle (6.6 MB vs ~500 MB+)
- Firecrawl requires 5 containers (Node.js, Redis, PostgreSQL, RabbitMQ, Playwright) — CRW is a single binary
Firecrawl independent review — Scrapeway benchmark: 64.3% success rate, $5.11/1K scrapes, 0% on LinkedIn/Twitter.
Resource comparison:
| Metric | CRW | Firecrawl |
|---|---|---|
| Min RAM | ~7 MB | 4 GB |
| Recommended RAM | ~64 MB (under load) | 8–16 GB |
| Docker images | single ~8 MB binary | ~2–3 GB total |
| Cold start | 85 ms | 30–60 seconds |
| Containers needed | 1 (+ optional sidecar) | 5 |
</details>

Features
- MCP server — built-in stdio + HTTP transport for Claude Code, Cursor, Windsurf, and any MCP client
- Firecrawl-compatible API — same endpoint family and familiar request/response ergonomics
- 6 output formats — markdown, HTML, cleaned HTML, raw HTML, plain text, links, structured JSON
- LLM structured extraction