Replayer
An HTTP request replay and comparison tool written in Rust. Perfect for testing API changes, comparing environments, load testing, validating migrations, and generating detailed reports.
Features
Core
- Replay HTTP requests from JSON log files
- Multi-target support - test multiple environments simultaneously
- Concurrent execution with configurable limits
- Smart filtering by method, path, and limits
- Ignore rules for skipping noisy or irrelevant fields during diffing
- Regression rules - automatically fail runs when behavioral or performance regressions are detected
Performance & Load
- Rate limiting - control requests per second
- Configurable timeouts and delays
- Real-time progress tracking with ETA
- Detailed latency statistics (p50, p90, p95, p99, min, max, avg)
Response Comparison
- Automatic diff detection between targets
- Status code mismatch reporting
- Response body comparison
- Latency comparison across targets
- Per-target statistics breakdown
- Ignore fields during comparison
Authentication & Headers
- Bearer token authentication
- Custom API headers (repeatable)
- Supports multiple headers simultaneously
Output
- Colorized console output for easy reading
- JSON output for programmatic use and CI/CD
- HTML reports with executive summary, latency charts, per-target breakdown, and difference highlighting
- Summary-only mode for quick overview
Logs
- Nginx log conversion to JSON Lines format (combined/common)
- Supports filtering and replay directly from raw logs
- Fully replayable: captured logs can be replayed or compared after the fact
Exit Codes
Replayer returns specific exit codes to allow CI/CD pipelines and scripts to react programmatically:
| Exit Code | Meaning |
| --------- | ------------------------------------------------------------------ |
| 0 | Run completed successfully, no differences or errors |
| 1 | Differences detected between targets (used with --compare) |
| 2 | One or more regression rules were violated |
| 3 | Invalid arguments or command-line usage |
| 4 | Runtime error occurred (network, file I/O, or unexpected failure) |
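In CI, these codes can be branched on directly. A minimal sketch (the replayer invocation is commented out and illustrative):

```bash
#!/bin/sh
# Map Replayer exit codes to human-readable CI messages.
explain_exit() {
  case "$1" in
    0) echo "ok: no differences or errors" ;;
    1) echo "diff: differences detected between targets" ;;
    2) echo "regression: one or more rules violated" ;;
    3) echo "usage: invalid arguments" ;;
    4) echo "error: runtime failure (network, file I/O)" ;;
    *) echo "unknown exit code: $1" ;;
  esac
}

# ./replayer --input-file logs.json --compare staging.api production.api
# explain_exit $?
```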
🚀 Quick Start
Installation
```bash
# Clone the repository
git clone <repo-url>
cd replayer

# Build all components
make build
```
Usage Guide
Basic Replay
Replay requests against a single target:
```bash
./replayer --input-file test_logs.json --concurrency 5 localhost:8080
```
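The input log format itself is not shown in this README. As a rough sketch, it is a JSON Lines file with one request object per line; the field names below are assumptions — inspect a file produced by --parse-nginx or --capture for the exact schema:

```json
{"method": "GET", "path": "/users/42", "headers": {"Accept": "application/json"}, "body": null}
{"method": "POST", "path": "/checkout", "headers": {"Content-Type": "application/json"}, "body": "{\"item_id\": 7}"}
```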
Compare Staging vs Production
The killer feature - compare two environments side-by-side:
```bash
./replayer \
  --input-file prod_logs.json \
  --compare \
  --concurrency 10 \
  staging.example.com \
  production.example.com
```
Load Testing
Simulate realistic load patterns:
```bash
./replayer \
  --input-file logs.json \
  --rate-limit 1000 \
  --concurrency 50 \
  --timeout 10000 \
  localhost:8080
```
Authentication & Custom Headers
Provide auth token or custom headers:
```bash
# Bearer token
./replayer --input-file logs.json --auth "Bearer token123" api.example.com

# Custom headers
./replayer --input-file logs.json --header "X-API-Key: abc" --header "X-Env: staging" api.example.com
```
HTML Report Generation
```bash
# Single target
./replayer --input-file logs.json --html-report report.html localhost:8080

# Comparison mode
./replayer --input-file logs.json --compare --html-report comparison_report.html staging.api production.api
```
Nginx Log Parsing
```bash
# Convert nginx logs to JSON Lines
./replayer --input-file /var/log/nginx/access.log --parse-nginx traffic.json --nginx-format combined

# Replay converted logs
./replayer --input-file traffic.json --concurrency 10 staging.api.com
```
Filter Specific Requests
Test only certain endpoints:
```bash
# Only replay POST requests to /checkout
./replayer \
  --input-file test_logs.json \
  --filter-method POST \
  --filter-path /checkout \
  --limit 100 \
  localhost:8080
```
Ignore Rules
Ignore specific JSON fields when comparing responses:
| Type | Example |
|------|---------|
| Exact field | --ignore status.updated_at |
| Wildcard | --ignore '*.timestamp' |
| Multiple fields | --ignore x --ignore y --ignore z |
```bash
# Ignore timestamps, request IDs, metadata
./replayer \
  --input-file logs.json \
  --compare \
  --ignore "*.timestamp" \
  --ignore "request_id" \
  --ignore "metadata.*" \
  staging.api prod.api

# Ignore an entire object subtree
--ignore "debug_info"
```
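As a hypothetical illustration of how the three pattern types apply, given the response body below only user.name would still be compared:

```text
{"user": {"name": "ada", "timestamp": "2024-01-01T00:00:00Z"}, "request_id": "abc", "metadata": {"host": "a1"}}

--ignore "*.timestamp"   -> matches user.timestamp (suffix wildcard)
--ignore "request_id"    -> matches the exact top-level field
--ignore "metadata.*"    -> matches everything under metadata
```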
JSON Output for Automation
Perfect for CI/CD pipelines:
```bash
./replayer \
  --input-file test_logs.json \
  --output-json \
  --compare \
  staging.api \
  production.api > results.json

cat results.json | jq '.summary.succeeded'
```
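Building on this, a CI step can gate on the summary counts. The field names below (failed, diffs) are assumptions about the JSON schema, not documented behavior; inspect your own results.json first:

```bash
#!/bin/sh
# Gate a CI step on the replay summary (field names are assumed, not documented).
results='{"summary": {"succeeded": 98, "failed": 2, "diffs": 1}}'  # normally: results=$(cat results.json)
failed=$(echo "$results" | jq '.summary.failed')
diffs=$(echo "$results" | jq '.summary.diffs')
if [ "$failed" -gt 0 ] || [ "$diffs" -gt 0 ]; then
  echo "gate: $failed failures, $diffs diffs"
fi
```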
Live Capture Mode
Capture requests in real time from a running service or proxy, and replay or compare them on the fly:
```bash
# HTTP capture
./replayer --capture \
  --listen :8080 \
  --upstream http://staging.api \
  --output traffic.json \
  --stream

# HTTPS capture
./replayer --capture \
  --listen :8080 \
  --upstream https://staging.api \
  --output traffic.json \
  --stream \
  --tls-cert proxy.crt \
  --tls-key proxy.key

# Replay captured traffic
./replayer --input-file traffic.json staging.api

# Compare captured traffic between two environments
./replayer --input-file traffic.json --compare staging.api production.api
```
Once capture finishes, use the generated traffic.json file to replay or compare as usual.
Regression Rules (Contract & Performance)
Declare regression rules in a YAML file. Replayer can then fail runs automatically when behavioral or performance regressions are detected:
```bash
./replayer \
  --input-file traffic.json \
  --compare \
  --rules rules.yaml \
  staging.api \
  production.api
```
If any rule is violated, the run fails (exit code 2) and the violations are reported.
rules.yaml example
```yaml
rules:
  status_mismatch:
    max: 0
  body_diff:
    allowed: false
    ignore:
      - "*.timestamp"
      - "request_id"
  latency:
    metric: p95
    regression_percent: 20
endpoint_rules:
  - path: /users
    method: GET
    status_mismatch:
      max: 0
  - path: /slow
    latency:
      metric: p95
      regression_percent: 10
```
- Status: fails if response statuses differ
- Body: match exact fields, or prefix/suffix wildcards
- Latency: requires a baseline (available metrics: min, max, avg, p50, p90, p95, p99)
Example:
```bash
# Generate a baseline
./replayer \
  --input-file traffic.json \
  --compare \
  --output-json \
  staging.api production.api > baseline.json

# Check new runs against it
./replayer \
  --input-file traffic.json \
  --compare \
  --rules rules.yaml \
  --baseline baseline.json \
  staging.api production.api
```
Dry Run Mode
Preview what will be replayed without sending requests:
```bash
./replayer --input-file test_logs.json --dry-run
```
Cloud Upload
Upload replay results to Replayer Cloud for tracking, comparison, and team collaboration:
```bash
# set your API key (or use --cloud-api-key flag)
export REPLAYER_API_KEY="rp_your_api_key_here"

# upload results to cloud
./replayer \
  --input-file traffic.json \
  --compare \
  --cloud \
  --cloud-env production \
  --cloud-label "version=v1.2.3" \
  --cloud-label "branch=main" \
  staging.api \
  production.api
```
**What you get:**
- Historical tracking of all replay runs
- Web UI for viewing results and diffs
- Baseline comparison across runs
- Team collaboration with shared results
## Replayer Cloud
Replayer Cloud is a self-hosted SaaS platform for storing, comparing, and analyzing replay results with:
- Register, login, email verification
- Generate API keys for CLI access
- Browse and search all replay runs
- Set baselines and compare new runs
- Organize runs by environment
### Running Replayer Cloud
```bash
export DATABASE_URL="postgres://user:pass@localhost/replayer?sslmode=disable"
export SESSION_SECRET="your-32-character-secret-key-here"
export BASE_URL="http://localhost:8090"
cargo run -p replayer-cloud
```
### Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| DATABASE_URL | Yes | PostgreSQL connection string |
| SESSION_SECRET | Yes | 32+ character secret for session encryption |
| BASE_URL | Yes | Public URL for email links |
| SECURE_COOKIES | No | Set to true in production (default: false) |
| SMTP_HOST | No | SMTP server host for email verification |
| SMTP_PORT | No | SMTP server port (default: 587) |
| SMTP_USER | No | SMTP username |
| SMTP_PASSWORD | No | SMTP password |
| SMTP_FROM | No | From address for emails |
### API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | /api/v1/runs | Upload a new run |
| GET | /api/v1/runs | List runs (paginated) |
| GET | /api/v1/runs/{id} | Get run details |
| POST | /api/v1/runs/{id}/baseline | Set run as baseline |
| GET | /api/v1/compare/{id} | Compare run with baseline |
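A sketch of exercising these endpoints with curl. The Authorization: Bearer header format and the run IDs are assumptions (the CLI uses REPLAYER_API_KEY, so a bearer token is a plausible scheme); adjust to your deployment:

```bash
#!/bin/sh
BASE="http://localhost:8090"       # your BASE_URL
api() { echo "$BASE/api/v1/$1"; }  # helper to build endpoint URLs

# List runs (paginated)
curl -s -H "Authorization: Bearer $REPLAYER_API_KEY" "$(api 'runs?page=1')"

# Set run 42 as baseline, then compare run 43 against the baseline
curl -s -X POST -H "Authorization: Bearer $REPLAYER_API_KEY" "$(api runs/42/baseline)"
curl -s -H "Authorization: Bearer $REPLAYER_API_KEY" "$(api compare/43)"
```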
Command-Line Options
| Flag | Type | Default | Description |
|------|------|---------|-------------|
| --input-file | string | required | Path to the input log file |
| --concurrency | int | 1 | Number of concurrent requests |
| --timeout | int | 5000 | Request timeout in milliseconds |
| --delay | int | 0 | Delay between requests in milliseconds |
| --rate-limit | int | 0 | Maximum requests per second (0 = unlimited) |
| --limit | int | 0 | Limit number of requests to replay (0 = all) |
| --filter-method | string | "" | Filter by HTTP method (GET, POST, etc.) |
| --filter-path | string | "" | Filter by path substring |
| --compare | bool | false | Compare responses between targets |
| --output-json | bool | false | Output results as JSON |