<div align="center">

# 🎭 PersonaLive

**Expressive Portrait Animation for Live Streaming**

English | 简体中文 | 繁體中文 | 日本語

<img src="assets/demo_3.gif" width="45%">&nbsp;&nbsp;<img src="assets/demo_2.gif" width="40%">

</div>

## ✨ Features
- 🎥 Real-time Animation - Drive portrait animation with your webcam in real time
- 🎞 Offline Processing - Generate animation videos from a reference image + driving video
- 🌐 Multi-language UI - English, 简体中文, 繁體中文, 日本語
- 🌙 Dark Mode - Eye-friendly dark theme support
- 📸 Screenshot & Recording - Capture and record animation output
- 🖥️ Fullscreen Mode - Immersive fullscreen experience
- 📊 GPU Monitoring - Real-time GPU status and memory management
- 🔌 REST API - Full API with Swagger documentation
- 🤖 MCP Support - Model Context Protocol for AI assistants
## 🚀 Quick Start

### Docker (Recommended)

```bash
# Pull the all-in-one image (includes all model weights)
docker pull neosun/personalive:allinone

# Run
docker run -d --gpus all -p 7870:7870 --name personalive neosun/personalive:allinone

# Access
open http://localhost:7870
```
### Docker Compose

```yaml
services:
  personalive:
    image: neosun/personalive:allinone
    ports:
      - "7870:7870"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

```bash
docker compose up -d
```
## 📦 Installation

### Prerequisites

- NVIDIA GPU with 12GB+ VRAM
- Docker with the NVIDIA Container Toolkit
- Or, for local installs: Python 3.10 and CUDA 12.1
### Method 1: Docker All-in-One (Easiest)

```bash
docker pull neosun/personalive:allinone
docker run -d --gpus all -p 7870:7870 neosun/personalive:allinone
```
### Method 2: Docker with Volume Mount

```bash
# Clone the repo
git clone https://github.com/neosun100/personalive.git
cd personalive

# Download weights
python tools/download_weights.py

# Run with mounted weights
docker run -d --gpus all -p 7870:7870 \
  -v $(pwd)/pretrained_weights:/app/pretrained_weights \
  neosun/personalive:latest
```
### Method 3: Local Development

```bash
# Clone
git clone https://github.com/neosun100/personalive.git
cd personalive

# Create environment
conda create -n personalive python=3.10
conda activate personalive

# Install dependencies
pip install -r requirements_base.txt
pip install -r requirements_api.txt

# Download weights
python tools/download_weights.py

# Build frontend
cd webcam/frontend && npm install && npm run build && cd ../..

# Run
python app.py
```
## ⚙️ Configuration

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | 7870 | Server port |
| `HOST` | 0.0.0.0 | Listen address |
| `GPU_IDLE_TIMEOUT` | 600 | GPU idle timeout (seconds) |
| `ACCELERATION` | xformers | Acceleration mode (`none`/`xformers`/`tensorrt`) |

### Example `.env`

```bash
PORT=7870
HOST=0.0.0.0
GPU_IDLE_TIMEOUT=600
ACCELERATION=xformers
```
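The defaults in the table above can be reproduced in a few lines. This is a hypothetical sketch of how a config loader might read them (`load_config` is an illustrative helper, not part of the repo), falling back to the documented defaults when a variable is unset:

```python
# Hypothetical helper (not part of the repo) showing how the documented
# defaults could be read from the environment.
import os

def load_config(env=None):
    """Read PersonaLive settings, falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "port": int(env.get("PORT", "7870")),
        "host": env.get("HOST", "0.0.0.0"),
        "gpu_idle_timeout": int(env.get("GPU_IDLE_TIMEOUT", "600")),
        "acceleration": env.get("ACCELERATION", "xformers"),
    }

# With no overrides, the documented defaults come back:
defaults = load_config(env={})
```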
## 📖 Usage

### Web UI

1. Open http://localhost:7870
2. Select or upload a reference portrait
3. Click "Fuse Reference" to prepare the model
4. Allow webcam access and click "Start Animation"
5. Move your face to drive the animation!

### Offline Mode

1. Switch to the "Offline Mode" tab
2. Upload a reference image (PNG/JPG)
3. Upload a driving video (MP4)
4. Set max frames and click "Process"
5. Download the result video
### REST API

```bash
# Health check
curl http://localhost:7870/health

# GPU status
curl http://localhost:7870/api/gpu/status

# Offline processing
curl -X POST http://localhost:7870/api/process/offline \
  -F "reference_image=@portrait.png" \
  -F "driving_video=@video.mp4"
```

Full API documentation: http://localhost:7870/docs
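The same endpoints can be called from Python. Below is a minimal stdlib-only sketch; the response schemas are assumptions, so check the Swagger docs above for the authoritative shapes:

```python
# Minimal stdlib client for the endpoints above. The response schemas are
# assumptions — consult http://localhost:7870/docs for the authoritative API.
import json
import urllib.request

BASE = "http://localhost:7870"

def get_json(path, timeout=5):
    """GET an endpoint under BASE and decode its JSON body."""
    with urllib.request.urlopen(f"{BASE}{path}", timeout=timeout) as resp:
        return json.load(resp)

# With the server running, for example:
#   get_json("/health")            # liveness check
#   get_json("/api/gpu/status")    # GPU status and memory
```

For the multipart upload shown in the curl example, a client library such as `requests` (with `files={...}`) is the simplest route.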
## 🛠️ Tech Stack

- Backend: FastAPI, PyTorch, Diffusers
- Frontend: SvelteKit, TailwindCSS
- AI Models: Stable Diffusion, LivePortrait
- Acceleration: xFormers, TensorRT (optional)
## 📁 Project Structure

```text
personalive/
├── app.py                 # Main application
├── gpu_manager.py         # GPU resource manager
├── mcp_server.py          # MCP server
├── src/                   # Core models
├── webcam/                # Frontend & streaming
├── configs/               # Configuration files
├── tools/                 # Utility scripts
└── pretrained_weights/    # Model weights
```
## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing`)
5. Open a Pull Request
## 📝 Changelog

### v1.0.0 (2026-01-04)

- 🎉 Initial release
- ✨ Real-time webcam animation
- ✨ Offline video processing
- ✨ Multi-language UI (EN/CN/TW/JP)
- ✨ Dark mode support
- ✨ Screenshot & recording
- ✨ REST API with Swagger
- ✨ MCP support
- 🐳 Docker all-in-one image
## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.
## ⭐ Star History
## 📱 Follow Us

<div align="center">
  <img src="https://img.aws.xin/uPic/扫码_搜索联合传播样式-标准色版.png" width="200">
</div>

## 🙏 Acknowledgements

Based on PersonaLive by GVC Lab. Special thanks to the original authors.