<div align="center">

🎭 PersonaLive

Expressive Portrait Animation for Live Streaming


English | 简体中文 | 繁體中文 | 日本語

<img src="assets/demo_3.gif" width="45%"> &nbsp;&nbsp; <img src="assets/demo_2.gif" width="40%">

</div>

✨ Features

  • 🎥 Real-time Animation - Drive portrait animation with a webcam in real time
  • 📁 Offline Processing - Generate animation videos from a reference image + driving video
  • 🌍 Multi-language UI - English, 简体中文, 繁體中文, 日本語
  • 🌙 Dark Mode - Eye-friendly dark theme support
  • 📸 Screenshot & Recording - Capture and record animation output
  • 🖥️ Fullscreen Mode - Immersive fullscreen experience
  • 📊 GPU Monitoring - Real-time GPU status and memory management
  • 🔌 REST API - Full API with Swagger documentation
  • 🤖 MCP Support - Model Context Protocol for AI assistants

🚀 Quick Start

Docker (Recommended)

# Pull all-in-one image (includes all model weights)
docker pull neosun/personalive:allinone

# Run
docker run -d --gpus all -p 7870:7870 --name personalive neosun/personalive:allinone

# Access
open http://localhost:7870

Docker Compose

services:
  personalive:
    image: neosun/personalive:allinone
    ports:
      - "7870:7870"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

docker compose up -d

📦 Installation

Prerequisites

  • NVIDIA GPU with 12GB+ VRAM
  • Docker with NVIDIA Container Toolkit
  • Or: Python 3.10, CUDA 12.1

Method 1: Docker All-in-One (Easiest)

docker pull neosun/personalive:allinone
docker run -d --gpus all -p 7870:7870 neosun/personalive:allinone

Method 2: Docker with Volume Mount

# Clone repo
git clone https://github.com/neosun100/personalive.git
cd personalive

# Download weights
python tools/download_weights.py

# Run with mounted weights
docker run -d --gpus all -p 7870:7870 \
  -v $(pwd)/pretrained_weights:/app/pretrained_weights \
  neosun/personalive:latest

Method 3: Local Development

# Clone
git clone https://github.com/neosun100/personalive.git
cd personalive

# Create environment
conda create -n personalive python=3.10
conda activate personalive

# Install dependencies
pip install -r requirements_base.txt
pip install -r requirements_api.txt

# Download weights
python tools/download_weights.py

# Build frontend
cd webcam/frontend && npm install && npm run build && cd ../..

# Run
python app.py

⚙️ Configuration

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 7870 | Server port |
| HOST | 0.0.0.0 | Listen address |
| GPU_IDLE_TIMEOUT | 600 | GPU idle timeout (seconds) |
| ACCELERATION | xformers | Acceleration mode (none/xformers/tensorrt) |

Example .env

PORT=7870
HOST=0.0.0.0
GPU_IDLE_TIMEOUT=600
ACCELERATION=xformers
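The variable names and defaults above can be consumed in the usual way via the process environment. The following sketch is illustrative only: `load_config` is a hypothetical helper, and `app.py`'s actual parsing may differ.

```python
import os

def load_config(env) -> dict:
    """Parse PersonaLive settings from an environment mapping,
    falling back to the defaults listed in the table above."""
    cfg = {
        "port": int(env.get("PORT", "7870")),
        "host": env.get("HOST", "0.0.0.0"),
        "gpu_idle_timeout": int(env.get("GPU_IDLE_TIMEOUT", "600")),  # seconds
        "acceleration": env.get("ACCELERATION", "xformers"),
    }
    # Reject values outside the documented none/xformers/tensorrt set early,
    # rather than failing later at model-load time.
    if cfg["acceleration"] not in {"none", "xformers", "tensorrt"}:
        raise ValueError(f"Unsupported ACCELERATION: {cfg['acceleration']}")
    return cfg

if __name__ == "__main__":
    print(load_config(os.environ))
```

Validating `ACCELERATION` up front gives a clear error message instead of an obscure failure deep inside the inference stack.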

📖 Usage

Web UI

  1. Open http://localhost:7870
  2. Select or upload a reference portrait
  3. Click "Fuse Reference" to prepare the model
  4. Allow webcam access and click "Start Animation"
  5. Move your face to drive the animation!

Offline Mode

  1. Switch to "Offline Mode" tab
  2. Upload reference image (PNG/JPG)
  3. Upload driving video (MP4)
  4. Set max frames and click "Process"
  5. Download the result video

REST API

# Health check
curl http://localhost:7870/health

# GPU status
curl http://localhost:7870/api/gpu/status

# Offline processing
curl -X POST http://localhost:7870/api/process/offline \
  -F "reference_image=@portrait.png" \
  -F "driving_video=@video.mp4"

Full API documentation: http://localhost:7870/docs
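The curl calls above translate directly to Python. This client sketch uses the endpoint paths and form-field names from the examples; the response JSON shapes are assumptions, so check them against the Swagger docs at /docs (it also assumes the `requests` package is installed).

```python
"""Minimal Python client sketch for the PersonaLive REST API."""
import requests

BASE_URL = "http://localhost:7870"

def offline_fields(reference_image: str, driving_video: str) -> dict:
    """Map the multipart form-field names (from the curl example) to local file paths."""
    return {"reference_image": reference_image, "driving_video": driving_video}

if __name__ == "__main__":
    # Health check before submitting work
    print(requests.get(f"{BASE_URL}/health", timeout=5).json())

    # Submit an offline processing job; both inputs are multipart file uploads
    files = {name: open(path, "rb")
             for name, path in offline_fields("portrait.png", "video.mp4").items()}
    resp = requests.post(f"{BASE_URL}/api/process/offline", files=files, timeout=600)
    print(resp.status_code)
```

Offline processing can take minutes for long driving videos, hence the generous POST timeout.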


🛠️ Tech Stack

  • Backend: FastAPI, PyTorch, Diffusers
  • Frontend: SvelteKit, TailwindCSS
  • AI Models: Stable Diffusion, LivePortrait
  • Acceleration: xFormers, TensorRT (optional)

📁 Project Structure

personalive/
├── app.py                 # Main application
├── gpu_manager.py         # GPU resource manager
├── mcp_server.py          # MCP server
├── src/                   # Core models
├── webcam/                # Frontend & streaming
├── configs/               # Configuration files
├── tools/                 # Utility scripts
└── pretrained_weights/    # Model weights

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing)
  5. Open a Pull Request

📋 Changelog

v1.0.0 (2026-01-04)

  • 🎉 Initial release
  • ✨ Real-time webcam animation
  • ✨ Offline video processing
  • ✨ Multi-language UI (EN/CN/TW/JP)
  • ✨ Dark mode support
  • ✨ Screenshot & recording
  • ✨ REST API with Swagger
  • ✨ MCP support
  • 🐳 Docker all-in-one image

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


⭐ Star History

Star History Chart


📱 Follow Us

<div align="center"> <img src="https://img.aws.xin/uPic/扫码_搜索联合传播样式-标准色版.png" width="200"> </div>

🙏 Acknowledgements

Based on PersonaLive by GVC Lab. Special thanks to the original authors.
