# UniFace: A Unified Face Analysis Library in Python built on ONNX Runtime
UniFace is a lightweight, production-ready face analysis library built on ONNX Runtime. It provides high-performance face detection, recognition, landmark detection, face parsing, gaze estimation, and attribute analysis with hardware acceleration support across platforms.
## Features
- Face Detection — RetinaFace, SCRFD, YOLOv5-Face, and YOLOv8-Face with 5-point landmarks
- Face Recognition — ArcFace, MobileFace, and SphereFace embeddings
- Face Tracking — Multi-object tracking with BYTETracker for persistent IDs across video frames
- Facial Landmarks — 106-point landmark localization module (separate from 5-point detector landmarks)
- Face Parsing — BiSeNet semantic segmentation (19 classes), XSeg face masking
- Gaze Estimation — Real-time gaze direction with MobileGaze
- Attribute Analysis — Age, gender, race (FairFace), and emotion
- Vector Indexing — FAISS-backed embedding store for fast multi-identity search
- Anti-Spoofing — Face liveness detection with MiniFASNet
- Face Anonymization — 5 blur methods for privacy protection
- Hardware Acceleration — ARM64 (Apple Silicon), CUDA (NVIDIA), CPU
## Installation

### Standard installation

```bash
pip install uniface
```

### GPU support (CUDA)

```bash
pip install "uniface[gpu]"
```

### From source (latest version)

```bash
git clone https://github.com/yakhyo/uniface.git
cd uniface && pip install -e .
```

### FAISS vector indexing

```bash
pip install faiss-cpu  # or faiss-gpu for CUDA
```
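UniFace's FAISS-backed store indexes face embeddings for fast multi-identity search. Conceptually, the operation FAISS accelerates is a nearest-neighbor lookup over embedding vectors; the following pure-NumPy brute-force version (the `search` helper is ours, for illustration only, not UniFace's API) shows the idea:

```python
import numpy as np

def search(index: np.ndarray, query: np.ndarray, k: int = 1) -> np.ndarray:
    """Return indices of the k rows of `index` most similar to `query`.
    Rows and query are assumed L2-normalized, so dot product = cosine similarity."""
    sims = index @ query
    return np.argsort(-sims)[:k]

# Three normalized toy "embeddings" and a query closest to the second one
index = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
query = np.array([0.1, 0.99])
query /= np.linalg.norm(query)
print(search(index, query, k=2))  # [1 0]
```

FAISS performs the same ranked lookup with sub-linear index structures, which is why it pays off once the gallery grows past a few thousand identities.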
### Optional dependencies

- The emotion model uses TorchScript and requires `torch`: `pip install torch` (choose the correct build for your OS/CUDA).
- YOLOv5-Face and YOLOv8-Face support faster NMS with `torchvision`: `pip install torch torchvision`, then use `nms_mode='torchvision'`.
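The `torchvision` option swaps in `torchvision.ops.nms` for the suppression step. For reference, greedy non-maximum suppression keeps the highest-scoring detection and drops boxes that overlap it too much; a minimal NumPy sketch (boxes as `[x1, y1, x2, y2]`, not UniFace's internal implementation):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list[int]:
    """Greedy non-maximum suppression: keep the best box, suppress overlaps."""
    order = np.argsort(-scores)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box against the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] — the second box overlaps the first and is dropped
```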
## Model Downloads and Cache

Models are downloaded automatically on first use and verified via SHA-256.

Default cache location: `~/.uniface/models`

Override it with the programmatic API:

```python
from uniface.model_store import get_cache_dir, set_cache_dir

set_cache_dir('/data/models')
print(get_cache_dir())  # /data/models
```

or with an environment variable:

```bash
export UNIFACE_CACHE_DIR=/data/models
```
## Quick Example (Detection)

```python
import cv2

from uniface.detection import RetinaFace

detector = RetinaFace()

image = cv2.imread("photo.jpg")
if image is None:
    raise ValueError("Failed to load image. Check the path to 'photo.jpg'.")

faces = detector.detect(image)
for face in faces:
    print(f"Confidence: {face.confidence:.2f}")
    print(f"BBox: {face.bbox}")
    print(f"Landmarks: {face.landmarks.shape}")
```
<div align="center">
<img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/test_result.png" width="90%">
<p>Face Detection Model Output</p>
</div>
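UniFace also ships five blur methods for face anonymization (see Features). Its anonymization API is not shown in this README, but the core idea behind one common method, pixelation, is easy to sketch in plain NumPy over a detected bounding box, assuming `bbox` is `[x1, y1, x2, y2]` in pixels:

```python
import numpy as np

def pixelate_region(image: np.ndarray, bbox, block: int = 16) -> np.ndarray:
    """Pixelate image[y1:y2, x1:x2] by averaging over block x block tiles.
    Illustrative only — not UniFace's anonymization API."""
    x1, y1, x2, y2 = (int(v) for v in bbox)
    out = image.copy()
    roi = out[y1:y2, x1:x2]  # view into the copy, edited in place
    h, w = roi.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = roi[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(image.dtype)
    return out

# Example on a synthetic image
img = np.arange(64 * 64 * 3, dtype=np.uint8).reshape(64, 64, 3)
anon = pixelate_region(img, (8, 8, 40, 40), block=8)
```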
## Example (Face Analyzer)

```python
import cv2

from uniface.analyzer import FaceAnalyzer
from uniface.detection import RetinaFace
from uniface.recognition import ArcFace

detector = RetinaFace()
recognizer = ArcFace()
analyzer = FaceAnalyzer(detector, recognizer=recognizer)

image = cv2.imread("photo.jpg")
if image is None:
    raise ValueError("Failed to load image. Check the path to 'photo.jpg'.")

faces = analyzer.analyze(image)
for face in faces:
    print(face.bbox, face.embedding.shape if face.embedding is not None else None)
```
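The embeddings returned by the analyzer can be compared for face verification. UniFace may well expose its own verification helper; as a library-agnostic sketch, the standard approach is cosine similarity between the two embedding vectors, with a threshold tuned on your own data:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# With two faces from analyzer.analyze(...), this would be:
#   score = cosine_similarity(faces[0].embedding, faces[1].embedding)
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```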
## Execution Providers (ONNX Runtime)

```python
from uniface.detection import RetinaFace

# Force CPU-only inference
detector = RetinaFace(providers=["CPUExecutionProvider"])
```

See more in the docs: https://yakhyo.github.io/uniface/concepts/execution-providers/
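ONNX Runtime tries the providers in the order given and falls back when one is unavailable on the current machine. A minimal sketch of that resolution logic (the helper name is ours, for illustration; ONNX Runtime does this internally):

```python
def resolve_providers(preferred: list[str], available: list[str]) -> list[str]:
    """Keep preferred providers that are actually available, in order,
    appending CPU so inference never fails for lack of a provider."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# On a CUDA machine, onnxruntime.get_available_providers() might return:
available = ["CUDAExecutionProvider", "CPUExecutionProvider"]
print(resolve_providers(["CoreMLExecutionProvider", "CUDAExecutionProvider"], available))
# ['CUDAExecutionProvider', 'CPUExecutionProvider']
```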
## Documentation

Full documentation: https://yakhyo.github.io/uniface/

| Resource | Description |
|----------|-------------|
| Quickstart | Get up and running in 5 minutes |
| Model Zoo | All models, benchmarks, and selection guide |
| API Reference | Detailed module documentation |
| Tutorials | Step-by-step workflow examples |
| Guides | Architecture and design principles |
| Datasets | Training data and evaluation benchmarks |
## Datasets

| Task | Training Dataset | Models |
|------|-----------------|--------|
| Detection | WIDER FACE | RetinaFace, SCRFD, YOLOv5-Face, YOLOv8-Face |
| Recognition | MS1MV2 | MobileFace, SphereFace |
| Recognition | WebFace600K | ArcFace |
| Recognition | WebFace4M / 12M | AdaFace |
| Gaze | Gaze360 | MobileGaze |
| Parsing | CelebAMask-HQ | BiSeNet |
| Attributes | CelebA, FairFace, AffectNet | AgeGender, FairFace, Emotion |

See the Datasets documentation for download links, benchmarks, and details.
## Jupyter Notebooks

| Example | Description |
|---------|-------------|
| 01_face_detection.ipynb | Face detection and landmarks |
| 02_face_alignment.ipynb | Face alignment for recognition |
| 03_face_verification.ipynb | Compare faces for identity |
| 04_face_search.ipynb | Find a person in group photos |
| 05_face_analyzer.ipynb | All-in-one analysis |
| 06_face_parsing.ipynb | Semantic face segmentation |
| 07_face_anonymization.ipynb | Privacy-preserving blur |
| 08_gaze_estimation.ipynb | Gaze direction estimation |
| 09_face_segmentation.ipynb | Face segmentation with XSeg |
| 10_face_vector_store.ipynb | FAISS-backed face database |
## Licensing and Model Usage
UniFace is MIT-licensed, but several pretrained models carry their own licenses. Review: https://yakhyo.github.io/uniface/license-attribution/
Notable examples:
- YOLOv5-Face and YOLOv8-Face weights are GPL-3.0
- FairFace weights are CC BY 4.0
If you plan commercial use, verify model license compatibility.
## References

| Feature | Repository | Training | Description |
|---------|------------|:--------:|-------------|
| Detection | retinaface-pytorch | ✓ | RetinaFace PyTorch Training & Export |
| Detection | yolov5-face-onnx-inference | - | YOLOv5-Face ONNX Inference |
| Detection | [yolov8-face-onnx-inference](https://github.com/yakhyo/yolov8-fac
