LAM: Official PyTorch Implementation
<p align="center"> <strong>English | <a href="README_CN.md">中文</a></strong> </p> <p align="center"> <img src="./assets/images/logo.jpeg" width="20%"> </p><p align="center"> LAM: Large Avatar Model for One-shot Animatable Gaussian Head </p>
<p align="center"> SIGGRAPH 2025 </p>
<p align="center"> Yisheng He*, Xiaodong Gu*, Xiaodan Ye, Chao Xu, Zhengyi Zhao, Yuan Dong†, Weihao Yuan†, Zilong Dong, Liefeng Bo </p>
<p align="center"> Tongyi Lab, Alibaba Group</p>
<p align="center"> "Build 3D Interactive Chatting Avatar with One Image in Seconds!" </p>
<p align="center"> <img src="./assets/images/teaser.jpg" width="100%"> </p>

Core Highlights 🔥🔥🔥
- Ultra-realistic 3D Avatar Creation from One Image in Seconds
- Super-fast Cross-platform Animating and Rendering on Any Device
- Low-latency SDK for Realtime Interactive Chatting Avatar
📢 News
[September 9, 2025] We have released the technical report of PanoLAM!
[May 20, 2025] We have released the WebGL-Render!
[May 10, 2025] The ModelScope Demo now supports directly exporting the generated Avatar to files required by OpenAvatarChat for interactive chatting!
[April 30, 2025] We have released an Avatar Export Feature that allows users to chat with any LAM-generated 3D digital human on OpenAvatarChat. 🔥 <br>
[April 21, 2025] We have released the WebGL Interactive Chatting Avatar SDK on OpenAvatarChat (including LLM, ASR, TTS, Avatar), with which you can freely chat with the 3D Digital Human generated by LAM! 🔥 <br>
[April 19, 2025] We have released the Audio2Expression model, which can animate the generated LAM avatar with audio input! 🔥 <br>
<!-- **[April 10, 2025]** We have released the demo on [ModelScope](https://www.modelscope.cn/studios/Damo_XR_Lab/LAM_Large_Avatar_Model) Space ! <br> -->

To-Do List
- [x] Release LAM-small trained on VFHQ and Nersemble.
- [x] Release Huggingface space.
- [x] Release Modelscope space.
- [ ] Release LAM-large trained on a self-constructed large dataset.
- [x] Release WebGL Render for cross-platform animation and rendering.
- [x] Release audio driven model: Audio2Expression.
- [x] Release Interactive Chatting Avatar SDK with OpenAvatarChat, including LLM, ASR, TTS, Avatar.
🚀 Get Started
Online Demo
Avatar Generation from One Image:
Interactive Chatting:
Environment Setup
We provide a one-click installation package for Windows (CUDA 12.8), supported by "十字鱼". Video Download Link
Linux:
git clone https://github.com/aigc3d/LAM.git
cd LAM
# Install with CUDA 12.1
sh ./scripts/install/install_cu121.sh
# Or install with CUDA 11.8
sh ./scripts/install/install_cu118.sh
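The two install scripts differ only in the CUDA toolkit they target. A minimal helper sketch (the function name and error message are ours; only the script paths come from the repo) shows how the right script can be picked from a CUDA version string:

```python
def pick_install_script(cuda_version: str) -> str:
    """Map a CUDA version string (e.g. "12.1") to the matching install script."""
    major, minor = cuda_version.split(".")[:2]
    tag = f"cu{major}{minor}"
    # Script paths as shipped in the LAM repository.
    scripts = {
        "cu121": "./scripts/install/install_cu121.sh",
        "cu118": "./scripts/install/install_cu118.sh",
    }
    if tag not in scripts:
        raise ValueError(f"No install script for CUDA {cuda_version}; "
                         f"supported: 12.1 (cu121) and 11.8 (cu118)")
    return scripts[tag]

print(pick_install_script("12.1"))  # → ./scripts/install/install_cu121.sh
```

Other CUDA versions (including the 12.8 used by the Windows package) would need their own script or a compatible PyTorch build.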
Windows:
For Windows, please refer to the Windows Install Guide.
Model Weights
| Model | Training Data | HuggingFace | ModelScope | Reconstruction Time | A100 (A & R) | XiaoMi 14 Phone (A & R) |
|---------|------------------|-------------|------------|---------------------|--------------|--------------------------|
| LAM-20K | VFHQ | TBD | TBD | 1.4 s | 562.9 FPS | 110+ FPS |
| LAM-20K | VFHQ + NeRSemble | Link | Link | 1.4 s | 562.9 FPS | 110+ FPS |
| LAM-20K | Our large dataset | TBD | TBD | 1.4 s | 562.9 FPS | 110+ FPS |
(A & R: Animating & Rendering)
HuggingFace Download
# Download Assets
huggingface-cli download 3DAIGC/LAM-assets --local-dir ./tmp
tar -xf ./tmp/LAM_assets.tar && rm ./tmp/LAM_assets.tar
tar -xf ./tmp/thirdparty_models.tar && rm -r ./tmp/
# Download Model Weights
huggingface-cli download 3DAIGC/LAM-20K --local-dir ./model_zoo/lam_models/releases/lam/lam-20k/step_045500/
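Whichever download route you use, the Gradio app expects the weights under the `step_045500` directory passed to `--local-dir` above. A small sketch (the helper name is ours) checks that the download actually produced a non-empty directory before you launch:

```python
from pathlib import Path

def weights_ready(root: str) -> bool:
    """Return True if `root` is an existing, non-empty directory."""
    p = Path(root)
    return p.is_dir() and any(p.iterdir())

# Same path as the --local-dir argument in the download command above.
ckpt_dir = "./model_zoo/lam_models/releases/lam/lam-20k/step_045500/"
if not weights_ready(ckpt_dir):
    print(f"Weights missing; re-run the download into {ckpt_dir}")
```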
ModelScope Download
pip3 install modelscope
# Download Assets
modelscope download --model "Damo_XR_Lab/LAM-assets" --local_dir "./tmp/"
tar -xf ./tmp/LAM_assets.tar && rm ./tmp/LAM_assets.tar
tar -xf ./tmp/thirdparty_models.tar && rm -r ./tmp/
# Download Model Weights
modelscope download "Damo_XR_Lab/LAM-20K" --local_dir "./model_zoo/lam_models/releases/lam/lam-20k/step_045500/"
Gradio Run
python app_lam.py
If you want to export ZIP files for real-time conversations on OpenAvatarChat, please refer to the Guide.
python app_lam.py --blender_path /path/blender
Inference
sh ./scripts/inference.sh ${CONFIG} ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${MOTION_SEQ}
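The inference script takes four positional arguments in the order shown. A hedged Python wrapper (the wrapper name and the sample argument values are illustrative, not from the repo) can assemble the call, which is handy when scripting over many inputs:

```python
import subprocess

def build_inference_cmd(config, model_name, image_path, motion_seq):
    """Assemble the positional-argument call to scripts/inference.sh."""
    return ["sh", "./scripts/inference.sh", config, model_name, image_path, motion_seq]

cmd = build_inference_cmd("configs/lam.yaml",   # hypothetical config path
                          "lam-20k",            # hypothetical model name
                          "./inputs/face.jpg",  # image file or folder
                          "./motions/seq01")    # motion sequence
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```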
Acknowledgement
This work is built on many amazing research works and open-source projects:
Thanks for their excellent work and great contributions.
More Works
Welcome to follow our other interesting works:
Citation
@inproceedings{he2025lam,
title={LAM: Large Avatar Model for One-shot Animatable Gaussian Head},
author={He, Yisheng and Gu, Xiaodong and Ye, Xiaodan and Xu, Chao and Zhao, Zhengyi and Dong, Yuan and Yuan, Weihao and Dong, Zilong and Bo, Liefeng},
booktitle={Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
pages={1--13},
year={2025}
}