LHM

[ICCV2025] LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds

<span><img src="./assets/LHM_logo_parsing.png" height="35" style="vertical-align: top;"> - Official PyTorch Implementation</span>

<p align="center"> Lingteng Qiu<sup>*</sup>, Xiaodong Gu<sup>*</sup>, Peihao Li<sup>*</sup>, Qi Zuo<sup>*</sup>, Weichao Shen, Junfei Zhang, Kejie Qiu, Weihao Yuan<br> Guanying Chen<sup>+</sup>, Zilong Dong<sup>+</sup>, Liefeng Bo</p>
<p align="center"> Tongyi Lab, Alibaba Group <br> ICCV 2025 </p>

Project Website arXiv Paper HuggingFace ModelScope MotionShop2 Apache License

<p align="center"> <img src="./assets/LHM_teaser.png" height="100%"> </p>

If you are familiar with Chinese, you can read the Chinese version of the README.

📢 Latest Updates

  • [March 2026] LHM++ is now open-sourced! It supports arbitrary view inputs with higher efficiency (8-view input runs on just 8 GB of GPU memory) and superior rendering quality. See GitHub | arXiv
  • [June 26, 2025] LHM was accepted to ICCV 2025!!!
  • [April 16, 2025] We have released a memory-saving version of motion processing and LHM. You can now run the entire pipeline on 14 GB GPUs.
  • [April 13, 2025] We have released LHM-MINI, which allows you to run LHM on 16 GB GPUs. 🔥🔥🔥
  • [April 10, 2025] We released the motion extraction node and animation inference node of LHM on ComfyUI. With an extracted offline motion, you can generate a 10 s animation clip in 20 s!!! Update your ComfyUI branch now. 🔥🔥🔥
  • [April 9, 2025] We built a detailed tutorial that guides users through installing LHM-ComfyUI on Windows step by step!
  • [April 9, 2025] We released LHM_Track, the video processing pipeline for creating your own training data!

For more details about the updates, see 👉 👉 👉 logger.

TODO List

  • [x] Core Inference Pipeline (v0.1) 🔥🔥🔥
  • [x] HuggingFace Demo Integration 🤗🤗🤗
  • [x] ModelScope Deployment
  • [x] Motion Processing Scripts
  • [ ] Release Training data & Testing Data (License Available)
  • [ ] Training Codes Release

🚀 Getting Started

A step-by-step video tutorial on installing LHM and LHM-ComfyUI is available on YouTube, contributed by softicelee2.

A step-by-step video tutorial on installing LHM is available on Bilibili, contributed by 站长推荐推荐.

A step-by-step video tutorial on installing LHM-ComfyUI is available on Bilibili, contributed by 站长推荐推荐.

Build from Docker

Please make sure you have installed nvidia-docker on your system.

# Linux only
# CUDA 12.1
# Step 0: download the docker image
wget -P ./lhm_cuda_dockers https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM_Docker/lhm_cuda121.tar

# Step 1: load the docker image
sudo docker load -i ./lhm_cuda_dockers/lhm_cuda121.tar

# Step 2: run the container and expose port 7860 for communication
sudo docker run -p 7860:7860 -v PATH/FOLDER:DOCKER_WORKSPACES -it lhm:cuda_121 /bin/bash

Environment Setup

Clone the repository.

git clone git@github.com:aigc3d/LHM.git
cd LHM

Windows Installation

Set up a virtual environment: open Command Prompt (CMD), navigate to the project folder, and run:

python -m venv lhm_env
lhm_env\Scripts\activate
install_cu121.bat

python ./app.py

Linux Installation

# cuda 11.8
pip install rembg
sh ./install_cu118.sh

# cuda 12.1
sh ./install_cu121.sh

The installation has been tested with Python 3.10 and CUDA 11.8 or CUDA 12.1. Alternatively, you can install the dependencies step by step by following INSTALL.md.
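Before running the install scripts, the interpreter version can be sanity-checked against the tested configuration. This small helper is not part of the repository, just a standard-library sketch:

```python
import sys

def check_python_version(required=(3, 10)):
    """Return True if the running interpreter matches the tested
    major.minor version (Python 3.10 for this repository)."""
    return sys.version_info[:2] == required

if not check_python_version():
    print(f"Warning: LHM is tested with Python 3.10, "
          f"found {sys.version_info.major}.{sys.version_info.minor}")
```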

Model Weights

<span style="color:red">Please note that the model will be downloaded automatically if you do not download it yourself.</span>

| Model | Training Data | BH-T Layers | ModelScope | HuggingFace | Inference Time | Input Requirement |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| LHM-MINI | 300K Videos + 5K Synthetic Data | 2 | ModelScope | huggingface | 1.41 s | half & full body |
| LHM-500M | 300K Videos + 5K Synthetic Data | 5 | ModelScope | huggingface | 2.01 s | full body |
| LHM-500M-HF | 300K Videos + 5K Synthetic Data | 5 | ModelScope | huggingface | 2.01 s | half & full body |
| LHM-1.0B | 300K Videos + 5K Synthetic Data | 15 | ModelScope | huggingface | 6.57 s | full body |
| LHM-1B-HF | 300K Videos + 5K Synthetic Data | 15 | ModelScope | huggingface | 6.57 s | half & full body |

Model cards with additional details can be found in model_card.md.
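As a quick way to navigate the table above, the sketch below (a hypothetical helper, not part of the repository) encodes the published variants and picks the largest model that meets an input-type requirement within a latency budget:

```python
# Variants from the model table above:
# (BH-T layers, inference time in seconds, half-body input support)
MODELS = {
    "LHM-MINI":    {"layers": 2,  "time_s": 1.41, "half_body": True},
    "LHM-500M":    {"layers": 5,  "time_s": 2.01, "half_body": False},
    "LHM-500M-HF": {"layers": 5,  "time_s": 2.01, "half_body": True},
    "LHM-1.0B":    {"layers": 15, "time_s": 6.57, "half_body": False},
    "LHM-1B-HF":   {"layers": 15, "time_s": 6.57, "half_body": True},
}

def pick_model(need_half_body, max_time_s):
    """Return the variant with the most BH-T layers that supports the
    required input type within the latency budget, or None."""
    candidates = [
        name for name, spec in MODELS.items()
        if spec["time_s"] <= max_time_s
        and (spec["half_body"] or not need_half_body)
    ]
    return max(candidates, key=lambda n: MODELS[n]["layers"], default=None)
```

For example, `pick_model(need_half_body=True, max_time_s=2.5)` selects LHM-500M-HF: LHM-MINI also qualifies, but has fewer BH-T layers.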

Download from HuggingFace

from huggingface_hub import snapshot_download

# MINI Model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-MINI', cache_dir='./pretrained_models/huggingface')
# 500M-HF Model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-500M-HF', cache_dir='./pretrained_models/huggingface')
# 1B-HF Model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-1B-HF', cache_dir='./pretrained_models/huggingface')

Download from ModelScope


from modelscope import snapshot_download

# MINI Model
model_dir = snapshot_download(model_id='Damo_XR_Lab/LHM-MINI', cache_dir='./pretrained_models')
# 500M-HF Model
model_dir = snapshot_download(model_id='Damo_XR_Lab/LHM-500M-HF', cache_dir='./pretrained_models')
# 1B-HF Model
model_dir = snapshot_download(model_id='Damo_XR_Lab/LHM-1B-HF', cache_dir='./pretrained_models')

Download Prior Model Weights

# Download prior model weights
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/LHM_prior_model.tar 
tar -xvf LHM_prior_model.tar 

Data Motion Preparation

We provide test motion examples; the processing scripts will be updated as soon as possible.

# Download the test motion data
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/motion_video.tar
tar -xvf ./motion_video.tar 

After downloading the weights and data, the project structure should look like this:

├── configs
│   ├── inference
│   ├── accelerate-train-1gpu.yaml
│   ├── accelerate-train-deepspeed.yaml
│   ├── accelerate-train.yaml
│   └── infer-gradio.yaml
├── engine
│   ├── BiRefNet
│   ├── pose_estimation
│   ├── SegmentAPI
├── example_data
│   └── test_data
├── exps
│   ├── releases
├── LHM
│   ├── datasets
│   ├── losses
│   ├── models
│   ├── outputs
│   ├── runners
│   ├── utils
│   ├── launch.py
├── pretrained_models
│   ├── dense_sample_points
│   ├── gagatracker
│   ├── human_model_files
│   ├── sam2
│   ├── sapiens
│   ├── voxel_grid
│   ├── arcface_resnet18.pth
│   ├── BiRefNet-general-epoch_244.pth
├── scripts
│   ├── exp
│   ├── convert_hf.py
│   └── upload_hub.py
├── tools
│   ├── metrics
├── train_data
│   ├── example_imgs
│   ├── motion_video
├── inference.sh
├── README.md
├── requirements.txt

💻 Local Gradio Run

We now support user motion sequence input. Because the pose estimator itself requires GPU memory, this Gradio application needs at least 24 GB of GPU memory to run LHM-500M.


# Memory-saving version: uses less GPU memory but takes more time.
# The maximum supported length for 720P video is 20 s.
python ./app_motion_ms.py  
python ./app_motion_ms.py  --model_name LHM-1B-HF


# Supports user motion sequence input; requires at least 24 GB of GPU memory to run LHM-500M.
python ./app_motion.py  
python ./app_motion.py  --model_name LHM-1B-HF

# preprocessing video sequence
python ./app.py
python ./app.py --model_name LHM-1B

🏃 Inference Pipeline

Now we support upper-body image input! <img src="./assets/half_input.gif" width="75%" height="auto"/>

# MODEL_NAME={LHM-500M-HF, LHM-500M, LHM-1B, LHM-1B-HF}
# bash ./inference.sh LHM-500M-HF ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh LHM-500M ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh LHM-1B ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params

# animation
bash inference.sh ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER}  ${MO
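For programmatic batch runs, the inference.sh invocation pattern above can be wrapped in a small helper. This is a hypothetical sketch, not part of the repository; the model names are those listed in the comment above, and the third argument is the motion parameter folder as in the examples:

```python
import shlex

# Model names accepted by inference.sh, per the comment above
VALID_MODELS = {"LHM-500M-HF", "LHM-500M", "LHM-1B", "LHM-1B-HF"}

def inference_command(model_name, image_path, motion_params_dir):
    """Build the shell command string for inference.sh, validating the
    model name against the supported variants."""
    if model_name not in VALID_MODELS:
        raise ValueError(f"unknown model: {model_name}")
    return " ".join([
        "bash", "inference.sh",
        shlex.quote(model_name),
        shlex.quote(image_path),
        shlex.quote(motion_params_dir),
    ])
```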