# Gibson

Network monitoring tool that maps process-to-network connections, identifies cloud providers, detects beaconing activity, and generates firewall rules. Lightweight agent for collection, server for aggregation, parser for analysis, with offline ASN lookup.
## Screenshots

Cross-machine overview — OS version comparison, shared IP detection, beaconing candidates across all hosts.

Per-host detail — critical alerts (EOL OS, nc.exe beaconing to unknown IP).

Per-host detail — warning (beacon candidate flagged).

Per-host detail — clean (no anomalies).

## Features

### Collector (Agent)
- 🔒 Secure: Optional AES-256-GCM encryption
- 🗜️ Efficient: Optional gzip compression
- 🌐 Cloud Upload: HTTP/HTTPS upload with API key support
- 📊 Real-time: Streaming data collection
- 🔍 DNS Resolution: Optional reverse DNS lookups
- 💾 Flexible Storage: JSONL format for easy parsing
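
JSONL means one independent JSON record per line, so output can be consumed as a stream without loading the whole file. A minimal sketch of parsing it — the field names (`process`, `raddr`, `rport`) are illustrative assumptions, not Gibson's actual schema:

```python
import json
from collections import Counter

# Two illustrative records; field names are assumptions, not Gibson's schema.
sample = [
    '{"process": "nc.exe", "raddr": "203.0.113.7", "rport": 4444}',
    '{"process": "chrome", "raddr": "142.250.64.78", "rport": 443}',
]
with open("connections.jsonl", "w") as f:
    f.write("\n".join(sample) + "\n")

# One json.loads per line -- no need to parse the file as a single document.
by_process = Counter()
with open("connections.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        by_process[rec["process"]] += 1

print(by_process["nc.exe"])  # 1
```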
### Parser (Analyzer)
- ☁️ Cloud Detection: Identifies AWS, Azure, GCP, Cloudflare, etc.
- 🔥 Firewall Rules: Auto-generates iptables/Windows rules
- 📈 Risk Scoring: Identifies suspicious processes
- 🗄️ Database Export: SQL export for further analysis
- 📊 Rich Reports: JSON summaries with detailed insights
## Quick Start

### Build

```bash
cargo build --release
```

### Basic Collection (5 minutes)

```bash
# Simple collection
cargo run --release -- collect --duration-seconds 300

# With DNS lookups
cargo run --release -- collect --duration-seconds 300 --enable-dns

# With compression and encryption
cargo run --release -- collect \
  --duration-seconds 300 \
  --compress \
  --encrypt-key "your-secret-password"
```
### Parse Collected Data

```bash
# Generate all reports
cargo run --release -- parse \
  --input connections.jsonl \
  --process-summary processes.json \
  --cloud-analysis cloud.json \
  --firewall-rules-iptables firewall.sh \
  --database-export network.sql

# Offline ownership lookup with local ASN DB (no network calls)
cargo run --release -- parse \
  --input connections.jsonl \
  --cloud-analysis cloud.json \
  --asn-db ip2asn-v4.tsv

# Live ARIN lookup with persistent cache (re-run skips already-queried IPs)
cargo run --release -- parse \
  --input connections.jsonl \
  --cloud-analysis cloud.json \
  --arin-lookup \
  --arin-cache arin_cache.json
```
### All-in-One Monitor Mode

```bash
# Quick 5-minute analysis
cargo run --release -- monitor \
  --duration-seconds 300 \
  --output-dir ./analysis \
  --full-analysis
```
## Agent Build

The agent binary is a minimal, zero-flag deployment target. All configuration is burned into the binary at compile time via environment variables — drop it on a target and run it with no arguments.

### Build

```bash
AGENT_SERVER="http://10.0.1.5:8080/upload" \
AGENT_KEY="labkey123" \
AGENT_INTERVAL="5" \
AGENT_BATCH="200" \
AGENT_DURATION="0" \
AGENT_DNS="false" \
AGENT_ENCRYPT_KEY="mysecretpassword" \
cargo build --release --bin agent
```

The resulting binary at `target/release/agent` has no external dependencies and requires no flags:

```bash
./agent
```
### Environment Variables
| Variable | Default | Description |
|---|---|---|
| AGENT_SERVER | http://localhost:8080/upload | Upload endpoint URL |
| AGENT_KEY | (none) | X-API-Key header value |
| AGENT_INTERVAL | 5 | Socket poll interval in seconds |
| AGENT_BATCH | 200 | Records per upload batch |
| AGENT_DURATION | 0 | Run duration in seconds (0 = run forever) |
| AGENT_DNS | false | Resolve IPs to hostnames |
| AGENT_ESTABLISHED | true | ESTABLISHED connections only |
| AGENT_LOCAL_COPY | false | Keep a local .jsonl copy alongside uploads |
| AGENT_COMPRESS | false | Gzip compress before upload |
| AGENT_ENCRYPT_KEY | (none) | AES-256-GCM encrypt payload (password or 64-char hex key) |
| AGENT_UA | (reqwest default) | HTTP User-Agent header |
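
`AGENT_ENCRYPT_KEY` accepts either a plain password or a 64-character hex key (32 bytes, as AES-256 requires). One way to generate the latter, sketched in Python:

```python
import secrets

# 32 random bytes rendered as 64 lowercase hex characters --
# suitable for AGENT_ENCRYPT_KEY (which also accepts a plain password).
key = secrets.token_hex(32)
print(len(key))  # 64
```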
### Example: Encrypted, Long-term Agent

```bash
AGENT_SERVER="https://collector.internal/upload" \
AGENT_KEY="prod-api-key" \
AGENT_DURATION="0" \
AGENT_INTERVAL="30" \
AGENT_COMPRESS="true" \
AGENT_ENCRYPT_KEY="$(cat /etc/gibson/key)" \
cargo build --release --bin agent
```
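
Because the agent takes no flags, persistent deployment is just a matter of keeping the binary running. A hypothetical systemd unit — the unit name and install path `/usr/local/bin/agent` are illustrative, not part of Gibson:

```ini
# /etc/systemd/system/gibson-agent.service (hypothetical name and path)
[Unit]
Description=Gibson network collection agent
After=network-online.target
Wants=network-online.target

[Service]
# All configuration is compiled in; no flags or environment needed
ExecStart=/usr/local/bin/agent
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now gibson-agent`.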
## Advanced Usage

### Secure Remote Collection

#### 1. Encrypted Collection with Upload

```bash
cargo run --release -- collect \
  --duration-seconds 3600 \
  --interval-seconds 10 \
  --compress \
  --encrypt-key "your-64-char-hex-key-or-password" \
  --upload-url "https://your-server.com/api/upload" \
  --api-key "your-api-key" \
  --batch-size 50 \
  --delete-after-upload
```

#### 2. Long-term Monitoring (24 hours)

```bash
cargo run --release -- collect \
  --duration-seconds 86400 \
  --interval-seconds 30 \
  --output connections_daily.jsonl \
  --enable-dns \
  --compress
```
### IP Ownership Lookup
The parser supports two mutually exclusive paths for identifying who owns unmatched IPs:
| Method | Flag | Speed | Network | Best for |
|---|---|---|---|---|
| Local ASN DB | --asn-db | Instant | None | Repeated analysis, air-gapped environments |
| Live ARIN RDAP | --arin-lookup | Slow (per-IP) | Yes | One-off lookups, no local DB available |
Download the ip2asn database (refresh weekly):

```bash
curl -O https://iptoasn.com/data/ip2asn-v4.tsv.gz && gunzip ip2asn-v4.tsv.gz
```
When --asn-db is provided, --arin-lookup is ignored. Use --arin-cache to persist ARIN results to disk so re-runs skip already-queried IPs.
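
The ip2asn TSV is tab-separated with columns `range_start`, `range_end`, `AS number`, `country`, `AS description`. A sketch of what a lookup against it amounts to (linear scan here for clarity; Gibson's actual implementation may differ):

```python
import ipaddress

# Two sample rows in the ip2asn-v4.tsv tab-separated layout:
# range_start  range_end  AS_number  country  AS_description
rows = [
    "8.8.8.0\t8.8.8.255\t15169\tUS\tGOOGLE",
    "1.1.1.0\t1.1.1.255\t13335\tUS\tCLOUDFLARENET",
]
with open("ip2asn-v4.tsv", "w") as f:
    f.write("\n".join(rows) + "\n")

def lookup(ip, db_path="ip2asn-v4.tsv"):
    """Linear scan; a real parser would binary-search the sorted ranges."""
    addr = ipaddress.ip_address(ip)
    with open(db_path) as f:
        for line in f:
            start, end, asn, country, name = line.rstrip("\n").split("\t")
            if ipaddress.ip_address(start) <= addr <= ipaddress.ip_address(end):
                return asn, name
    return None  # IP not covered by any announced range

print(lookup("8.8.8.8"))  # ('15169', 'GOOGLE')
```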
### Cloud Provider Analysis

```bash
# Parse with cloud detection focus
cargo run --release -- parse \
  --input connections.jsonl \
  --cloud-analysis cloud_report.json \
  --min-connections 5 \
  --whitelist-processes "chrome,firefox,safari,edge"
```
## Web Server Setup for Data Collection

### Option 1: Simple Python Flask Server

Create `collector_server.py`:
```python
from flask import Flask, request, jsonify
import os
import json
import base64
from datetime import datetime
from Crypto.Cipher import AES
import gzip

app = Flask(__name__)

# Configuration
UPLOAD_DIR = "./collected_data"
API_KEY = "your-secure-api-key"
ENCRYPTION_KEY = bytes.fromhex("your-32-byte-hex-key")  # Optional

os.makedirs(UPLOAD_DIR, exist_ok=True)

def decrypt_data(encrypted_data, key):
    """Decrypt base64-encoded AES-256-GCM data (12-byte nonce + ciphertext + 16-byte tag)"""
    decoded = base64.b64decode(encrypted_data)
    nonce = decoded[:12]
    ciphertext = decoded[12:]
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    # The 16-byte GCM auth tag is appended after the ciphertext
    plaintext = cipher.decrypt_and_verify(ciphertext[:-16], ciphertext[-16:])
    return plaintext

@app.route('/api/upload', methods=['POST'])
def upload():
    # Verify API key
    if request.headers.get('X-API-Key') != API_KEY:
        return jsonify({"error": "Invalid API key"}), 401

    try:
        data = request.get_data()
        if data.startswith(b'{'):
            # Plain JSON, parse directly
            batch = json.loads(data)
        else:
            # Encrypted (base64-encoded) payload
            decrypted = decrypt_data(data, ENCRYPTION_KEY)
            batch = json.loads(decrypted)

        # Save to file
        hostname = batch.get('hostname', 'unknown')
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        filename = f"{UPLOAD_DIR}/{hostname}_{timestamp}.json"
        with open(filename, 'w') as f:
            json.dump(batch, f)

        return jsonify({"status": "success", "file": filename}), 200
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, ssl_context='adhoc')  # Use proper SSL in production
```
Run with:

```bash
pip install flask pycryptodome
python collector_server.py
```
### Option 2: Nginx with Basic Upload

Create `/etc/nginx/sites-available/collector`:
```nginx
server {
    listen 443 ssl;
    server_name collector.yourcompany.com;

    ssl_certificate /etc/ssl/certs/your-cert.pem;
    ssl_certificate_key /etc/ssl/private/your-key.pem;

    client_max_body_size 100M;

    location /upload {
        # API key validation
        if ($http_x_api_key != "your-secure-api-key") {
            return 403;
        }

        # Save uploaded files
        client_body_in_file_only on;
        client_body_temp_path /var/uploads/;

        # Pass to processing script
        proxy_pass http://localhost:8080;
        proxy_set_header X-File $request_body_file;
    }
}
```
### Option 3: AWS Lambda Function
```javascript
// index.js for AWS Lambda
const AWS = require('aws-sdk');
const crypto = require('crypto');

const s3 = new AWS.S3();
const BUCKET_NAME = 'your-network-data-bucket';
const API_KEY = process.env.API_KEY;
const ENCRYPTION_KEY = Buffer.from(process.env.ENCRYPTION_KEY, 'hex');

exports.handler = async (event) => {
    // Verify API key
    if (event.headers['X-API-Key'] !== API_KEY) {
        return {
            statusCode: 401,
            body: JSON.stringify({ error: 'Invalid API key' })
        };
    }

    try {
        let data = event.body;

        // Decrypt if needed
        if (!data.startsWith('{')) {
            // Encrypted (base64) payload: 12-byte nonce + ciphertext + 16-byte tag
            const encrypted = Buffer.from(data, 'base64');
            const nonce = encrypted.slice(0, 12);
            const ciphertext = encrypted.slice(12);
            const decipher = crypto.createDecipheriv('aes-256-gcm', ENCRYPTION_KEY, nonce);
            // GCM requires the auth tag to be set before final()
            decipher.setAuthTag(ciphertext.slice(-16));
            const decrypted = Buffer.concat([
                decipher.update(ciphertext.slice(0, -16)),
                decipher.final()
            ]);
            data = decrypted.toString();
        }

        const batch = JSON.parse(data);
        const key = `${batch.hostname}/${Date.now()}_${batch.batch_id}.json`;
        await s3.putObject({ Bucket: BUCKET_NAME, Key: key, Body: data }).promise();

        return {
            statusCode: 200,
            body: JSON.stringify({ status: 'success', key })
        };
    } catch (err) {
        return {
            statusCode: 500,
            body: JSON.stringify({ error: err.message })
        };
    }
};
```