# RuView

<p align="center"> <a href="https://ruvnet.github.io/RuView/"> <img src="assets/ruview-small-gemini.jpg" alt="RuView - WiFi DensePose" width="100%"> </a> </p>

**See through walls with WiFi + AI.**
Perceive the world through signals. No cameras. No wearables. No Internet. Just physics.
RuView is an edge AI perception system that learns directly from the environment around it.

Instead of relying on cameras or cloud models, it observes whatever signals exist in a space (WiFi, radio waves across the spectrum, motion patterns, vibration, sound, and other sensory inputs) and builds an understanding of what is happening locally.
Built on top of RuVector, the project became widely known for its implementation of WiFi DensePose — a sensing technique first explored in academic research such as Carnegie Mellon University's DensePose From WiFi work. That research demonstrated that WiFi signals can be used to reconstruct human pose.
RuView extends that concept into a practical edge system. By analyzing Channel State Information (CSI) disturbances caused by human movement, RuView reconstructs body position, breathing rate, heart rate, and presence in real time using physics-based signal processing and machine learning.
Unlike research systems that rely on synchronized cameras for training, RuView is designed to operate entirely from radio signals and self-learned embeddings at the edge.
The system runs entirely on inexpensive hardware such as an ESP32 sensor mesh (as low as ~$1 per node). Small programmable edge modules analyze signals locally and learn the RF signature of a room over time, allowing the system to separate the environment from the activity happening inside it.
Because RuView learns in proximity to the signals it observes, it improves as it operates. Each deployment develops a local model of its surroundings and continuously adapts without requiring cameras, labeled data, or cloud infrastructure.
In practice this means ordinary environments gain a new kind of spatial awareness. Rooms, buildings, and devices begin to sense presence, movement, and vital activity using the signals that already fill the space.
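The room-subtraction idea works by learning a slow-moving baseline of each subcarrier's amplitude and flagging whatever deviates from it. Below is a minimal sketch in Python with synthetic data; the function names and the exponential-moving-average update are illustrative assumptions, not RuView's actual implementation:

```python
import numpy as np

def update_baseline(baseline, frame, alpha=0.01):
    """Exponential moving average of CSI amplitudes: the learned 'room' signature."""
    return (1 - alpha) * baseline + alpha * frame

def motion_residual(baseline, frame):
    """What remains after subtracting the learned room: human activity."""
    return frame - baseline

# Simulate 56 subcarrier amplitudes: a static room, then a burst of motion.
rng = np.random.default_rng(0)
room = 10 + rng.normal(0, 0.1, 56)           # static multipath profile
baseline = np.zeros(56)
for _ in range(500):                          # learn the empty room over time
    baseline = update_baseline(baseline, room + rng.normal(0, 0.1, 56))

person = room + rng.normal(0, 0.1, 56)
person[20:30] += 3.0                          # movement perturbs a band of subcarriers
residual = motion_residual(baseline, person)
print(residual[20:30].mean() > 1.0)           # → True: motion stands out against the room
```

The same separation lets long-term drift (furniture moved, door left open) be absorbed into the baseline while short-term deviations are treated as activity.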
## Built for low-power edge applications
Edge modules are small programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response.
| What | How | Performance |
|------|-----|-------------|
| Pose estimation | CSI subcarrier amplitude/phase → DensePose UV maps | 54K fps (Rust) |
| Breathing detection | Bandpass 0.1-0.5 Hz → FFT peak | 6-30 BPM |
| Heart rate | Bandpass 0.8-2.0 Hz → FFT peak | 40-120 BPM |
| Presence sensing | RSSI variance + motion band power | < 1 ms latency |
| Through-wall | Fresnel zone geometry + multipath modeling | Up to 5 m depth |
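The breathing-detection row (bandpass 0.1-0.5 Hz, then FFT peak) can be sketched with NumPy on a synthetic CSI amplitude trace. `breathing_bpm` is an illustrative name, not RuView's API, and the bandpass is approximated here by masking the FFT bins to the 0.1-0.5 Hz band:

```python
import numpy as np

def breathing_bpm(csi_amplitude, fs, band=(0.1, 0.5)):
    """Estimate breathing rate: FFT of CSI amplitude, peak picked in the 0.1-0.5 Hz band."""
    x = csi_amplitude - csi_amplitude.mean()    # remove DC before the FFT
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[mask][np.argmax(power[mask])]
    return peak * 60.0                          # Hz -> breaths per minute

# Synthetic 60 s capture at 20 Hz: 0.25 Hz chest motion (15 BPM) plus noise.
fs = 20.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
signal = 10 + 0.5 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.05, t.size)
print(round(breathing_bpm(signal, fs)))         # → 15
```

Heart rate works the same way with the 0.8-2.0 Hz band, since cardiac motion modulates the channel at a higher frequency than breathing.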
```bash
# 30 seconds to live sensing — no toolchain required
docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 ruvnet/wifi-densepose:latest
# Open http://localhost:3000
```
> [!NOTE]
> **CSI-capable hardware required.** Pose estimation, vital signs, and through-wall sensing rely on Channel State Information (CSI) — per-subcarrier amplitude and phase data that standard consumer WiFi does not expose. You need CSI-capable hardware (ESP32-S3 or a research NIC) for full functionality. Consumer WiFi laptops can only provide RSSI-based presence detection, which is significantly less capable.
Hardware options for live CSI capture:
| Option | Hardware | Cost | Full CSI | Capabilities |
|--------|----------|------|----------|--------------|
| ESP32 Mesh (recommended) | 3-6x ESP32-S3 + WiFi router | ~$54 | Yes | Pose, breathing, heartbeat, motion, presence |
| Research NIC | Intel 5300 / Atheros AR9580 | ~$50-100 | Yes | Full CSI with 3x3 MIMO |
| Any WiFi | Windows, macOS, or Linux laptop | $0 | No | RSSI-only: coarse presence and motion |
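For the RSSI-only fallback row, presence detection reduces to a variance test over a window of signal-strength samples: an empty room gives a stable RSSI, while a moving person perturbs it. A hedged sketch with synthetic data; the threshold and window size are illustrative, not RuView's tuned values:

```python
import numpy as np

def presence(rssi_window, var_threshold=1.0):
    """RSSI-only fallback: variance above a threshold suggests motion/presence."""
    return np.var(rssi_window) > var_threshold

rng = np.random.default_rng(2)
empty_room = -60 + rng.normal(0, 0.3, 100)       # stable RSSI, no one moving
occupied = -60 + rng.normal(0, 0.3, 100) + 4 * np.sin(np.linspace(0, 6, 100))
print(presence(empty_room), presence(occupied))  # → False True
```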
No hardware? Verify the signal processing pipeline with the deterministic reference signal:
```bash
python v1/data/proof/verify.py
```
## 📖 Documentation
| Document | Description |
|----------|-------------|
| User Guide | Step-by-step guide: installation, first run, API usage, hardware setup, training |
| Build Guide | Building from source (Rust and Python) |
| Architecture Decisions | 62 ADRs — why each technical choice was made, organized by domain (hardware, signal processing, ML, platform, infrastructure) |
| Domain Models | 7 DDD models (RuvSense, Signal Processing, Training Pipeline, Hardware Platform, Sensing Server, WiFi-Mat, CHCI) — bounded contexts, aggregates, domain events, and ubiquitous language |
| Desktop App | WIP — Tauri v2 desktop app for node management, OTA updates, WASM deployment, and mesh visualization |
| Medical Examples | Contactless blood pressure, heart rate, breathing rate via 60 GHz mmWave radar — $15 hardware, no wearable |
<a href="https://ruvnet.github.io/RuView/">
  <img src="assets/v2-screen.png" alt="WiFi DensePose — Live pose detection with setup guide" width="800">
</a>
<br>
<em>Real-time pose skeleton from WiFi CSI signals — no cameras, no wearables</em>
<br><br>
<a href="https://ruvnet.github.io/RuView/"><strong>▶ Live Observatory Demo</strong></a> | <a href="https://ruvnet.github.io/RuView/pose-fusion.html"><strong>▶ Dual-Modal Pose Fusion Demo</strong></a>
The server is optional for visualization and aggregation — the ESP32 runs independently for presence detection, vital signs, and fall alerts.
**Live ESP32 pipeline:** connect an ESP32-S3 node → run the sensing server → open the pose fusion demo for real-time dual-modal pose estimation (webcam + WiFi CSI). See ADR-059.
## 🚀 Key Features

### Sensing
See people, breathing, and heartbeats through walls — using only WiFi signals already in the room.
| | Feature | What It Means |
|---|---------|---------------|
| 🔒 | Privacy-First | Tracks human pose using only WiFi signals — no cameras, no video, no images stored |
| 💓 | Vital Signs | Detects breathing rate (6-30 breaths/min) and heart rate (40-120 bpm) without any wearable |
| 👥 | Multi-Person | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |
| 🧱 | Through-Wall | WiFi passes through walls, furniture, and debris — works where cameras cannot |
| 🚑 | Disaster Response | Detects trapped survivors through rubble and classifies injury severity (START triage) |
| 📡 | Multistatic Mesh | 4-6 low-cost sensor nodes work together, combining 12+ overlapping signal paths for full 360-degree room coverage with sub-inch accuracy and no person mix-ups (ADR-029) |
| 🌐 | Persistent Field Model | The system learns the RF signature of each room — then subtracts the room to isolate human motion, detect drift over days, predict intent before movement starts, and flag spoofing attempts (ADR-030) |
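The through-wall capability rests on Fresnel zone geometry: a person inside the first Fresnel zone of a transmitter-receiver link perturbs the dominant propagation paths even when a wall blocks line of sight. A small sketch of the standard first-zone radius formula, `r_n = sqrt(n * wavelength * d1 * d2 / (d1 + d2))`; the link parameters below are illustrative:

```python
import math

def fresnel_radius(freq_hz, d1_m, d2_m, n=1):
    """Radius of the n-th Fresnel zone at a point d1 from TX and d2 from RX."""
    wavelength = 3e8 / freq_hz
    return math.sqrt(n * wavelength * d1_m * d2_m / (d1_m + d2_m))

# 2.4 GHz link, person standing midway on a 5 m path:
r1 = fresnel_radius(2.4e9, 2.5, 2.5)
print(f"{r1:.2f} m")   # → 0.40 m: first Fresnel zone radius at the midpoint
```

A roughly 0.4 m radius at the midpoint of a short 2.4 GHz link means a human torso easily disturbs the zone, which is why body-scale motion is detectable through obstructions.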
### Intelligence
The system learns on its own and gets smarter over time — no hand-tuning, no labeled data required.
| | Feature | What It Means |
|---|---------|---------------|
| 🧠 | Self-Learning | Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap (ADR-024) |
| 🎯 | AI Signal Processing | Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically (RuVector) |
| 🌍 | Works Everywhere | Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware (ADR-027) |
| 👁️ | Cross-Viewpoint Fusion | AI combines what each sensor sees from its own angle — fills in blind spots and depth ambiguity that no single viewpoint can resolve on its own (ADR-031) |
| 🔮 | Signal-Line Protocol | A 6-stage processing pipeline transforms raw WiFi signals into structured body representations — from signal cleanup through graph-based spatial reasoning to final pose output (ADR-033) |
| 🔒 | QUIC Mesh Security | All sensor-to-sensor communication is encrypted end-to-end with tamper detection, replay protection, and seamless reconnection if a node moves or drops offline (ADR-032) |
| 🎯 | Adaptive Classifier | Rec |
