netz :globe_with_meridians::eagle:
The purpose of this project is to discover internet-wide misconfigurations of network components like web servers, databases, cache services and more.
The basic use case for such a misconfiguration: a service that is publicly exposed to the world without credentials ¯\_(ツ)_/¯
You are probably familiar with tools like Shodan, Censys and ZoomEye for querying such internet-wide components, but here we are going to do it in a fun way :: by hand :D
The tools we are going to use are masscan and zgrab2 from the ZMap project. In the first phase we will use masscan for port scanning; in the second phase we will run zgrab2 to check application-level access on those ports.
ZMap is also an internet-wide scanner, so why masscan and not ZMap..? Because we want to go wild and use the PF_RING ZC (Zero Copy) kernel module to get blazing-fast packets per second and scan the entire internet in minutes. ZMap did support it in the past, but it is no longer compatible with the latest PF_RING ZC (Zero Copy).
Note that PF_RING ZC (Zero Copy) requires a license per MAC/NIC (you can run for 5 minutes in demo mode before it kills the flow), and you need a special NIC from Intel (don't worry, the public clouds have such instances). You can also go without this module and pay in time, waiting longer for results.
There are a few options to run this project:
- Use the netz cloud runner tool: it automates the full pipeline, including the infrastructure on top of AWS
- Run it yourself using Docker
- For PF_RING ZC (Zero Copy), run the infrastructure yourself using the pf_ring setup
If you want to read more about it, you can find it here: Scan the whole internet while drinking coffee
TL;DR
In discover.sh you will find a test for Elasticsearch.
The flow is:
- run masscan on the entire internet for port 9200 (the Elasticsearch port)
- pipe the IP list from step 1 into zgrab2 (with the ZGRAB2_ENDPOINT environment variable you can switch to any Elasticsearch API endpoint, for instance /_cat/indices)
- extract with jq just those IPs that return HTTP 200 OK and include lucene_version
The result of this flow is a list of IPs where Elasticsearch is reachable from the internet without credentials.
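Sketched end to end, the flow looks roughly like this. The masscan/zgrab2 flags and the zgrab2 JSON field layout are assumptions here (check them against your builds and against discover.sh); to keep the sketch self-contained, the zgrab2 output is mocked with two hand-written result lines:

```shell
# Phases 1 + 2 (sketch only, flags assumed):
#   masscan 0.0.0.0/0 -p9200 --rate 1000000 -oL masscan.out
#   awk '/open/ {print $4}' masscan.out | zgrab2 http --port 9200 --endpoint /_cat/indices > results.jsonl

# Mocked zgrab2 output (one JSON object per host; field layout assumed
# from zgrab2's http module) so the jq step below runs stand-alone:
cat > results.jsonl <<'EOF'
{"ip":"203.0.113.10","data":{"http":{"result":{"response":{"status_code":200,"body":"green open .kibana lucene_version 8.11.1"}}}}}
{"ip":"203.0.113.11","data":{"http":{"result":{"response":{"status_code":401,"body":"authentication required"}}}}}
EOF

# Phase 3: keep only IPs that answered HTTP 200 and mention lucene_version
jq -r 'select(.data.http.result.response.status_code == 200
              and (.data.http.result.response.body | contains("lucene_version")))
       | .ip' results.jsonl
```

Only the first mocked host survives the filter, which is exactly the "open Elasticsearch" signal the flow is after.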
This test flow demonstrates an Elasticsearch scan. You can run such scans on any service port you wish and on any protocol supported by the zgrab2 modules. These environment variables give you more control:
PORT_TO_SCAN
SUBNET_TO_SCAN
ZGRAB2_ENDPOINT
In case a protocol you need is missing, you can extend zgrab2 by adding new protocols
We will go through setups that get faster and faster (decreasing the time you wait).
Let's Go :rocket:
1. netz cloud runner tool
This is the easiest option as it automates everything in AWS on top of Elastic Container Service (ECS).
What it does:
- Create an IAM role for the pipeline
- Put an IAM policy
- Create an instance profile
- Associate the IAM role with the instance profile
- Create a temporary ECS cluster
- Create an EC2 instance (instance type based on user input --instance-type)
- Create a number of network interfaces (number based on user input --number-of-nic)
- Create public Elastic IPs (number based on user input --number-of-nic)
- Associate each Elastic IP with a network interface (one per --number-of-nic)
- Run an ECS task with the scanning pipeline
- Create a CloudWatch log group and stream the pipeline's docker output into the user terminal
- Destroy all AWS resources
- Done
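The list above maps onto plain AWS API calls. As a non-authoritative sketch (the runner drives the AWS SDK itself; the flag values and file:// policy documents below are placeholders, not files shipped with the project), the equivalent CLI steps would look roughly like:

```shell
# Sketch only: values are placeholders, policy documents are hypothetical files
aws iam create-role --role-name netzRole --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name netzRole --policy-name netzPolicy --policy-document file://policy.json
aws iam create-instance-profile --instance-profile-name netzInstanceProfile
aws iam add-role-to-instance-profile --instance-profile-name netzInstanceProfile --role-name netzRole
aws ecs create-cluster --cluster-name netz
aws ec2 run-instances --image-id ami-XXXXXXXX --instance-type c4.8xlarge \
    --iam-instance-profile Name=netzInstanceProfile
aws ec2 create-network-interface --subnet-id subnet-XXXXXXXX
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-XXXXXXXX --network-interface-id eni-XXXXXXXX
aws ecs run-task --cluster netz --task-definition netz
```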
How to run
Configure AWS credentials. You can do it via ~/.aws/credentials,
or by setting environment variables:
AWS_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
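For example, as environment variables (the key values below are obvious placeholders); the ~/.aws/credentials alternative is a [default] profile holding the same access key pair:

```shell
export AWS_REGION=us-west-1
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```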
```shell
$ netz
NAME:
   netz - netz cloud runner

USAGE:
   netz [options]

COMMANDS:
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug                        Show debugging information (default: false)
   --file value                   Task definition file in JSON or YAML
   --cluster value                ECS cluster name (default: "netz")
   --log-group value              CloudWatch log group name to write logs to (default: "netz-runner")
   --security-group value         Security groups to launch the task. Can be specified multiple times
   --subnet value                 Subnet to launch the task in
   --region value                 AWS region
   --number-of-nic value          Number of network interfaces to create and attach to the instance (default: 0)
   --instance-type value          Instance type
   --instance-key-name value      Instance key name for ssh
   --role-name value              Role name for netz (default: "netzRole")
   --role-policy-name value       Role policy name for netz (default: "netzPolicy")
   --instance-profile-name value  Instance profile name to attach to the instance (default: "netzInstanceProfile")
   --task-timeout value           Task timeout (in minutes); stop everything after that (default: 120)
   --skip-destroy                 Skip destroying cloud resources when done (default: false)
   --help, -h                     show help (default: false)

Required flags: "file, security-group, subnet, region, number-of-nic, instance-type, instance-key-name"
```
Example
```shell
$ netz --file taskdefinition.json --security-group sg-XXXXXXXXXXXXXXXXXX --subnet subnet-XXXXXXXX --region us-west-1 --debug --number-of-nic 5 --instance-type c4.8xlarge --instance-key-name XXXXXXXXX
```
:warning:
Because masscan melts down the network, SSH will mostly be unavailable, and CloudWatch logs will be deferred, so the logs tailed in the user terminal will take some time to appear.
Note that taskdefinition.json relates to the automated run on AWS ECS.
In that file you can change the subnet and port to scan, as well as the application endpoint.
You can also control the CPU & RAM allocated to the task. This test assumed c4.8xlarge, so the config is 60 x cpu and 36 GB RAM.
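As a rough illustration, the relevant knobs sit in the container definition of the ECS task-definition format. This is a trimmed, hypothetical fragment (field names follow ECS; the values are illustrative, not the project's shipped defaults):

```json
{
  "family": "netz",
  "containerDefinitions": [
    {
      "name": "netz",
      "cpu": 1024,
      "memory": 36000,
      "environment": [
        { "name": "PORT_TO_SCAN",    "value": "9200" },
        { "name": "SUBNET_TO_SCAN",  "value": "0.0.0.0/0" },
        { "name": "ZGRAB2_ENDPOINT", "value": "/_cat/indices" }
      ]
    }
  ]
}
```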
Result
On AWS with c4.8xlarge with 6 x NIC, ~2.9M to ~3.5M PPS => took 25 minutes
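The 25-minute figure lines up with a back-of-the-envelope check: IPv4 has 2^32 = 4,294,967,296 addresses, so at roughly 3M packets per second a single SYN pass over the whole space takes about 23 minutes:

```shell
# one pass over the full IPv4 space at ~3M packets/sec, in whole minutes
echo $(( 4294967296 / 3000000 / 60 ))   # -> 23
```

The observed ~25 minutes is that pass plus retransmits and the zgrab2 phase.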
2. Run by yourself using docker
2.1 Basic
Run with Docker on a basic computer/NIC
Steps
```shell
$ git clone https://github.com/SpectralOps/netz
$ cd netz/docker/
$ docker build -t netz .
$ docker run -e PORT_TO_SCAN='80' -e SUBNET_TO_SCAN='216.239.38.21/32' -e ZGRAB2_ENDPOINT='/' -e TASK_DEFINITION='docker' -v /tmp/:/opt/out --network=host -it netz
```
:warning:
Scraping the entire internet with simple hardware and a simple internet backbone could take days
3. Faster :zap:
Run with Docker in the cloud with one 10 Gbps NIC
Run an instance with one 10 Gbps NIC (e.g. AWS c4.8xlarge, which already comes configured with one)
The steps are the same as in 2.1 Basic.
Result
On AWS with c4.8xlarge, ~700k to ~950k PPS => took 2.5 hours.
4. Faster++ :zap::dizzy:
Run with Docker in the cloud with multiple 10 Gbps NICs (e.g. AWS c4.8xlarge with 10 Gbps NICs)
- Run AWS c4.8xlarge with Ubuntu 18.04 and attach multiple NICs (ENIs)
- For each NIC you need to configure the OS to see the new interfaces.
Edit the netplan file:

```shell
vim /etc/netplan/50-cloud-init.yaml
```

Right now it has one NIC:

```yaml
network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true
            match:
                macaddress: 06:XX:XX:XX:XX:XX
            set-name: ens3
```
You need to add the second, the third and so on...

```yaml
network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true
            match:
                macaddress: 03:XX:XX:XX:XX:XX
            set-name: ens3
        ens4:
            dhcp4: true
            match:
                macaddress: 04:XX:XX:XX:XX:XX
            set-name: ens4
        ens5:
            dhcp4: true
            match:
                macaddress: 05:XX:XX:XX:XX:XX
            set-name: ens5
        ens6:
            dhcp4: true
            match:
                macaddress: 06:XX:XX:XX:XX:XX
            set-name: ens6
        ens7:
            dhcp4: true
            match:
                macaddress: 07:XX:XX:XX:XX:XX
            set-name: ens7
```
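After saving the file, the standard netplan workflow applies it (interface names and the exact listing will vary per instance):

```shell
sudo netplan apply   # re-render and apply /etc/netplan/*.yaml
ip -br link          # the new ensX interfaces should now be listed
```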