# AutoRecon
AutoRecon is a powerful automated reconnaissance tool designed to simplify and streamline the process of subdomain enumeration, URL discovery, web content analysis, and initial vulnerability scanning. This simplified version focuses on core tool orchestration, removing direct API key management for a leaner setup. It intelligently integrates with essential open-source tools to provide a comprehensive and organized workflow.
## Features

- **Subdomain Enumeration:**
  - Passive enumeration: leverages tools like `amass`, `subfinder`, and `sublist3r`.
  - Active enumeration: performs DNS brute-forcing with `dnsrecon` and virtual host enumeration with `ffuf`.
- **Live Domain Filtering:**
  - Filters discovered domains to identify live and responsive web servers using `httpx`, also extracting associated IP addresses for further scanning.
- **URL Discovery & JavaScript Analysis:**
  - Discovers URLs from various sources using `waybackurls`, `katana`, `waymore`, and `waybackrobots`.
  - Integrates `jslinks`: automatically extracts JavaScript files and analyzes them for potential endpoints.
  - Analyzes JavaScript files for sensitive information (e.g., API keys, credentials) using `SecretFinder`.
  - Integrates `crawler`: optionally performs dynamic, interactive web crawling to discover more endpoints and requests.
- **Web Content Discovery:**
  - Performs directory and file brute-forcing on live web servers using `gobuster` to uncover hidden paths and resources.
- **Port & Service Enumeration:**
  - Conducts fast port scanning with `naabu` and performs detailed service version detection and basic vulnerability scanning with `nmap` on identified open ports.
- **Parameter Discovery:**
  - Identifies potential URL parameters using `paramspider` to aid further testing.
- **Visual Reconnaissance:**
  - Automatically takes screenshots of all live websites using `httpx` for quick visual assessment.
- **Vulnerability Scanning:**
  - Performs initial vulnerability scanning using `nuclei` with community-contributed templates.
- **Organized Output:**
  - Saves all results in a structured directory per domain, with sorted and deduplicated files for easy analysis.
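The "sorted and deduplicated" step above boils down to merging the enumerators' output files. A minimal sketch with sample data (the file names and hosts are illustrative, not AutoRecon's exact internals):

```shell
# Merge and deduplicate subdomain lists the way a combined domains.txt is built.
# Sample data only; real runs would use the enumerators' actual output files.
cd "$(mktemp -d)"
printf 'a.example.com\nb.example.com\n' > amass.txt
printf 'b.example.com\nc.example.com\n' > subfinder.txt
sort -u amass.txt subfinder.txt > domains.txt   # 3 unique hosts remain
cat domains.txt
```

`sort -u` both sorts and drops duplicates in one pass, which is why the per-domain result files come out ready for diffing between runs.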
## Installation

### Prerequisites

- Linux-based system (e.g., Ubuntu, Debian, Kali Linux).
- Python 3 and `pip` installed.
- Go (Golang) installed for Go-based tools (version 1.16+ recommended).
- Basic system packages: `git`, `curl`, `wget`, `unzip`, `dnsutils`.
- Browser drivers for `crawler.py`: if you use the `--enable-crawler` option, you need Chrome or Firefox and the matching WebDriver (`chromedriver` or `geckodriver`) installed and on your PATH.

### Installation Steps

1. Clone the AutoRecon repository:

   ```bash
   git clone https://github.com/00xmora/autorecon.git
   cd autorecon
   ```

2. Install core dependencies. Most reconnaissance tools (`amass`, `subfinder`, `httpx`, `nuclei`, etc.) are installed via the provided `install.sh` script. Run this first:

   ```bash
   chmod +x install.sh
   ./install.sh
   ```

   This script handles the installation of common tools and sets up basic paths.

3. Run `autorecon.py`. The script handles the installation of `jslinks` and `crawler` (if `--enable-crawler` is used) on its first run if they are not detected on your PATH: it clones their repositories from GitHub, installs Python dependencies, and creates the necessary symlinks in `/usr/local/bin/`.
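Before a first run, it can help to see which of the orchestrated tools are already on your PATH. A minimal sketch (the tool list is taken from the feature list above; AutoRecon and `install.sh` do their own detection):

```shell
# Report which core tools are already installed; prints one line per tool.
report=$(mktemp)
for tool in amass subfinder httpx nuclei gobuster naabu nmap dnsrecon ffuf; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done | tee "$report"
```

Anything reported missing after `install.sh` usually means the tool's directory is not yet on PATH (see the post-installation note below).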
## Docker Usage

You can run AutoRecon in Docker to get a consistent environment without installing all dependencies manually.

Important note for `crawler.py`: if you intend to use the `--enable-crawler` option, `crawler.py` will attempt to launch a browser for manual login. The Docker container therefore needs access to a display server (X server) unless it runs in `--crawler-headless` mode. For most use cases inside Docker, `--crawler-headless` is recommended; the project's Dockerfile includes the necessary browser dependencies.
### 1. Build the Docker Image

Navigate to the directory containing `autorecon.py` and the Dockerfile, then build the image:

```bash
docker build -t autorecon .
```
### 2. Run the Docker Container

When running the container, mount a local directory to store the reconnaissance results. `autorecon.py` no longer uses a `config.ini` file (API key integration has been removed), and `jslinks` and `crawler` are self-installed by the script inside the container.

```bash
docker run -it --rm \
  -v "$(pwd)/my_recon_data:/app/output" \
  autorecon -n my_project -d example.com --all-recon --enable-crawler --crawler-headless
```

- `-it`: Runs the container in interactive mode and allocates a pseudo-TTY.
- `--rm`: Automatically removes the container when it exits.
- `-v "$(pwd)/my_recon_data:/app/output"`: Mounts a local directory (e.g., `my_recon_data` in your current working directory) to `/app/output` inside the container. All output files are saved there, so you can access them after the container finishes.
  - Note: replace `my_recon_data` with your desired local directory name. `autorecon` creates project directories inside this mounted volume.
- `autorecon -n my_project -d example.com --all-recon --enable-crawler --crawler-headless`: The `autorecon` command with your desired arguments.
  - If you enable `--enable-crawler`, it is highly recommended to also use `--crawler-headless` in non-interactive Docker environments.
**Example Docker Run:**

To run full reconnaissance on `target.com` with dynamic crawling in headless mode and save results to a local `recon_output` folder:

```bash
mkdir recon_output  # create the local directory first
docker run -it --rm \
  -v "$(pwd)/recon_output:/app/output" \
  autorecon -n target_scan -d target.com --all-recon --enable-crawler --crawler-headless
```
**Important Post-Installation Steps:**

- Restart your terminal or run `source ~/.bashrc` (or `~/.profile`) to ensure your PATH is updated and newly installed tools are found.
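Go-installed tools typically land in `$HOME/go/bin`, which is a common reason freshly installed binaries are "not found". A sketch that checks for it (assumes the default `GOPATH` used by `go install`; it only prints a suggestion, it does not edit your shell profile):

```shell
# Check whether Go's default install dir is on PATH (assumption: default GOPATH of ~/go).
GO_BIN="$HOME/go/bin"
case ":$PATH:" in
  *":$GO_BIN:"*) msg="already on PATH: $GO_BIN" ;;
  *)             msg="add to ~/.bashrc: export PATH=\"\$PATH:$GO_BIN\"" ;;
esac
echo "$msg"
```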
## Usage

Run `autorecon.py` with a project name and one or more domains. Enable specific reconnaissance phases with the options below, or run `--all-recon` for a comprehensive scan.

```bash
./autorecon.py -n MyProject -d example.com example2.com
```
## Options

- `-n, --project-name <name>`: (Required) Name of the project directory where results are saved.
- `-d, --domains <domain1> [domain2 ...]`: One or more target domains to perform reconnaissance on.
- `-w, --wordlist <path>`: Path to a custom wordlist for DNS enumeration (`dnsrecon`) and `ffuf`. Overrides the default SecLists wordlist.
- `--crawl`: Enable URL discovery and crawling (`waybackurls`, `katana`, `waymore`, `jslinks`).
- `-active`: Enable active subdomain enumeration (`dnsrecon` and `ffuf`).
- `-r, --recursive`: Enable recursive JS endpoint extraction (used with `--crawl`).
- `-H, --header <"Header-Name: value">`: Custom headers for HTTP requests (e.g., for JS crawling or web content discovery). Can be specified multiple times.
- `-t, --threads <num>`: Number of threads for concurrent tool execution (default: 10).
- `--all-recon`: Enable all reconnaissance phases: active enumeration, URL crawling, port scanning, web content discovery, parameter discovery, screenshots, JS analysis, and vulnerability scanning.
- `--ports-scan`: Enable port and service enumeration with `naabu` and `nmap`.
- `--web-content-discovery`: Enable web content discovery (directory brute-forcing with `gobuster`).
- `--params-discovery`: Enable URL parameter discovery with `paramspider`.
- `--screenshots`: Enable screenshots of live websites with `httpx`.
- `--js-analysis`: Enable analysis of JavaScript files for secrets and additional endpoints.
- `--vuln-scan`: Enable basic vulnerability scanning with `nuclei`.
### `crawler.py` Specific Options (for dynamic crawling)

- `--enable-crawler`: Enable dynamic crawling with `crawler.py`. Note: this requires manual login interaction in the opened browser window.
- `--crawler-max-pages <num>`: Maximum number of pages for `crawler.py` to crawl (default: 10).
- `--crawler-output-format <format>`: Output format for `crawler.py` (`json`, `txt`, `csv`). AutoRecon primarily processes JSON internally.
- `--crawler-headless`: Run `crawler.py` in headless browser mode (no GUI).
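If you export crawler results as JSON, discovered URLs can be pulled out with standard tools. A hedged sketch over sample data; the `"url"` field name is an assumption about `crawler.py`'s schema, so adjust it to the real output:

```shell
# Extract "url" values from a JSON crawl dump with grep/cut.
# Sample data; the "url" field name is assumed, not confirmed from crawler.py.
cd "$(mktemp -d)"
cat > crawl.json <<'EOF'
[{"url": "https://example.com/login", "method": "GET"},
 {"url": "https://example.com/api/user", "method": "POST"}]
EOF
grep -o '"url": *"[^"]*"' crawl.json | cut -d'"' -f4
```

For anything beyond flat key/value extraction, a real JSON parser such as `jq` is the safer choice.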
## Output

Results are saved in a structured directory for each domain within your specified project name:

```
MyProject/
├── example.com/
│   ├── domains.txt                # All discovered passive subdomains
│   ├── domain.live                # Live/responsive subdomains
│   ├── ips.txt                    # Unique IPs resolved from live domains
│   ├── urls.txt                   # All discovered URLs (from crawling and JS analysis)
│   ├── js_endpoints.txt           # URLs of JavaScript files found
│   ├── js_secrets.txt             # Discovered secrets/sensitive data from JS files
│   ├── discovered_paths.txt       # Paths found via web content discovery
│   ├── naabu_open_ports.txt       # Open ports identified by naabu
│   ├── nmap_detailed_scan.xml     # Detailed Nmap scan results (XML)
│   ├── discovered_parameters.txt  # Discovered URL parameters
│   ├── nuclei_results.txt         # Vulnerability scan results from Nuclei
│   └── screenshots/               # Directory containing website screenshots
└── example2.com/
    └── ... (similar structure)
```
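Because each domain gets the same line-per-result files, quick shell summaries are easy after a run. A sketch using sample data that mirrors the tree above (the loop is not part of AutoRecon itself):

```shell
# Summarize per-domain result counts; sample files mirror the documented output tree.
cd "$(mktemp -d)"
mkdir -p MyProject/example.com
printf 'a.example.com\nb.example.com\n' > MyProject/example.com/domain.live
printf 'https://a.example.com/login\n'  > MyProject/example.com/urls.txt
for d in MyProject/*/; do
  domain=$(basename "$d")
  echo "$domain: $(wc -l < "${d}domain.live") live hosts, $(wc -l < "${d}urls.txt") URLs"
done
```

The same pattern extends to `nuclei_results.txt` or `naabu_open_ports.txt` for a one-line triage view per target.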
## Example Usage

1. Run a full reconnaissance scan on `example.com` and `test.com` with custom headers:

   ```bash
   ./autorecon.py -n MyFullScan -d example.com test.com --all-recon -H "User-Agent: MyReconTool/1.0"
   ```

2. Run passive subdomain enumeration and URL crawling with recursive JS analysis and a custom wordlist:

   ```bash
   ./autorecon.py -n MyCrawlScan -d example.com --crawl -r -w /path/to/wordlist.txt
   ```