MusePose
MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation.
Zhengyan Tong, Chao Li, Zhaokang Chen, Bin Wu<sup>†</sup>, Wenjiang Zhou (<sup>†</sup>Corresponding Author, benbinwu@tencent.com)
Lyra Lab, Tencent Music Entertainment
github | huggingface | space (coming soon) | Project (coming soon) | Technical report (coming soon)
MusePose is an image-to-video generation framework for virtual humans driven by control signals such as pose. The currently released model is an implementation of AnimateAnyone, built by optimizing Moore-AnimateAnyone.
MusePose is the last building block of the Muse open-source series. Together with MuseV and MuseTalk, we hope the community will join us and march towards the vision where a virtual human can be generated end-to-end, with native full-body movement and interaction. Please stay tuned for our next milestone!
We really appreciate AnimateAnyone for their academic paper and Moore-AnimateAnyone for their code base, which have significantly expedited the development of the AIGC community and MusePose.
Update:
- We have released the training code of MusePose!
Overview
MusePose is a diffusion-based and pose-guided virtual human video generation framework.
Our main contributions could be summarized as follows:
- The released model can generate dance videos of the human character in a reference image under a given pose sequence. The result quality exceeds that of almost all current open-source models on the same task.
- We release the `pose align` algorithm, which lets users align arbitrary dance videos to arbitrary reference images. This significantly improves inference performance and model usability.
- We have fixed several important bugs and made improvements on top of the code of Moore-AnimateAnyone.
Demos
<table class="center"> <tr> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/bb52ca3e-8a5c-405a-8575-7ab42abca248" muted="false"></video> </td> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/6667c9ae-8417-49a1-bbbb-fe1695404c23" muted="false"></video> </td> </tr> <tr> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/7f7a3aaf-2720-4b50-8bca-3257acce4733" muted="false"></video> </td> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/c56f7e9c-d94d-494e-88e6-62a4a3c1e016" muted="false"></video> </td> </tr> <tr> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/00a9faec-2453-4834-ad1f-44eb0ec8247d" muted="false"></video> </td> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/41ad26b3-d477-4975-bf29-73a3c9ed0380" muted="false"></video> </td> </tr> <tr> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/2bbebf98-6805-4f1b-b769-537f69cc0e4b" muted="false"></video> </td> <td width=50% style="border: none"> <video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/1b2b97d0-0ae9-49a6-83ba-b3024ae64f08" muted="false"></video> </td> </tr> </table>News
- [05/27/2024] Released MusePose and pretrained models.
- [05/31/2024] Support Comfyui-MusePose.
- [06/14/2024] Fixed a bug in inference_v2.yaml.
- [03/04/2025] Released the training code.
Todo:
- [x] release our trained models and inference codes of MusePose.
- [x] release pose align algorithm.
- [x] Comfyui-MusePose
- [x] training guidelines.
- [ ] Huggingface Gradio demo.
- [ ] an improved architecture and model (may take longer).
Getting Started
We provide a detailed tutorial about the installation and the basic usage of MusePose for new users:
Installation
To prepare the Python environment and install additional packages such as opencv, diffusers, and mmcv, please follow the steps below:
Build environment
We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:
pip install -r requirements.txt
mmlab packages
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
Download weights
You can download weights manually as follows:
- Download our trained weights.
- Download the weights of the other components:
  - sd-image-variations-diffusers
  - sd-vae-ft-mse
  - dwpose
  - yolox (make sure to rename it to yolox_l_8x8_300e_coco.pth)
  - image_encoder
  - control_v11p_sd15_openpose (for training only)
  - animatediff (for training only)
Finally, these weights should be organized in pretrained_weights as follows:
./pretrained_weights/
|-- MusePose
| |-- denoising_unet.pth
| |-- motion_module.pth
| |-- pose_guider.pth
| └── reference_unet.pth
|-- dwpose
| |-- dw-ll_ucoco_384.pth
| └── yolox_l_8x8_300e_coco.pth
|-- sd-image-variations-diffusers
| └── unet
| |-- config.json
| └── diffusion_pytorch_model.bin
|-- image_encoder
| |-- config.json
| └── pytorch_model.bin
|-- sd-vae-ft-mse
| |-- config.json
| └── diffusion_pytorch_model.bin
|-- control_v11p_sd15_openpose
| └── diffusion_pytorch_model.bin
└── animatediff
└── mm_sd_v15_v2.ckpt
Quickstart
Inference
Preparation
Prepare your reference images and dance videos in the folder ./assets, organized as in the example:
./assets/
|-- images
| └── ref.png
└── videos
└── dance.mp4
Pose Alignment
Get the aligned dwpose of the reference image:
python pose_align.py --imgfn_refer ./assets/images/ref.png --vidfn ./assets/videos/dance.mp4
After this, you can see the pose alignment results in ./assets/poses: ./assets/poses/align/img_ref_video_dance.mp4 is the aligned dwpose video, and ./assets/poses/align_demo/img_ref_video_dance.mp4 is for debugging.
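The idea behind the alignment can be sketched as fitting the driving video's keypoints to the reference image's detected pose with a similarity transform. This is an illustrative simplification, not the actual pose_align.py implementation; computing the transform from whole-pose bounding boxes is an assumption made for clarity.

```python
# Illustrative sketch of pose alignment: scale and translate the driving
# video's keypoints so their bounding box matches the reference pose's.
# The real pose_align.py is more elaborate; this only conveys the idea.

def bbox(kps):
    xs = [x for x, y in kps]
    ys = [y for x, y in kps]
    return min(xs), min(ys), max(xs), max(ys)

def align(video_kps, ref_kps):
    vx0, vy0, vx1, vy1 = bbox(video_kps)
    rx0, ry0, rx1, ry1 = bbox(ref_kps)
    # Uniform scale so the driving pose's height matches the reference's.
    scale = (ry1 - ry0) / (vy1 - vy0)
    # Translation mapping the driving bbox center onto the reference's.
    vcx, vcy = (vx0 + vx1) / 2, (vy0 + vy1) / 2
    rcx, rcy = (rx0 + rx1) / 2, (ry0 + ry1) / 2
    return [((x - vcx) * scale + rcx, (y - vcy) * scale + rcy)
            for x, y in video_kps]
```

Applying the same transform to every frame of the driving video keeps the motion intact while placing the dancer at the reference character's position and size.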
Inferring MusePose
Add the path of the reference image and the aligned dwpose video to the test config file ./configs/test_stage_2.yaml, as in the example:
test_cases:
"./assets/images/ref.png":
- "./assets/poses/align/img_ref_video_dance.mp4"
Then, simply run
python test_stage_2.py --config ./configs/test_stage_2.yaml
./configs/test_stage_2.yaml is the path to the inference configuration file.
Finally, you can see the output results in ./output/
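The test_cases mapping shown above expands into (reference image, pose video) pairs: each key is a reference image and each value lists one or more aligned pose videos to drive it. A sketch of how such a config is typically consumed:

```python
# Sketch: expand a test_cases mapping (as in test_stage_2.yaml) into
# (reference image, pose video) pairs for inference.
test_cases = {
    "./assets/images/ref.png": [
        "./assets/poses/align/img_ref_video_dance.mp4",
    ],
}

pairs = [(img, vid) for img, vids in test_cases.items() for vid in vids]
```

Listing several pose videos under one reference image therefore drives the same character with each of them in turn.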
Reducing VRAM cost
If you want to reduce the VRAM cost, you could set the width and height for inference. For example,
python test_stage_2.py --config ./configs/test_stage_2.yaml -W 512 -H 512
It will generate the video at 512 x 512 first, and then resize it back to the original size of the pose video.
Currently, it takes 16 GB of VRAM to run at 512 x 512 x 48 and 28 GB of VRAM to run at 768 x 768 x 48. Note, however, that the inference resolution affects the final results (especially the face region).
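Using only the two data points quoted above, a rough helper can pick the largest documented square resolution that fits a given GPU. This is a sketch, not an official sizing tool; actual usage also depends on frame count and other settings.

```python
# Rough sizing helper based solely on the figures quoted above:
# 16 GB for 512 x 512 x 48, 28 GB for 768 x 768 x 48.
def max_square_resolution(vram_gb):
    if vram_gb >= 28:
        return 768
    if vram_gb >= 16:
        return 512
    return None  # below the smallest documented configuration
```

For example, a 24 GB card would be sized at 512, while an 80 GB card comfortably covers 768.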
Face Enhancement
If you want better consistency of the face region, you can use FaceFusion: its face-swap function can swap the face from the reference image into the generated video.
Training
- Prepare data

  First, put all your dance videos in a folder such as ./xxx.

  Next, run python extract_dwpose_keypoints.py --video_dir ./xxx. The extracted dwpose keypoints will be saved in ./xxx_dwpose_keypoints.

  Then, run python draw_dwpose.py --video_dir ./xxx. The rendered dwpose videos will be saved in ./xxx_dwpose_without_face if draw_face=False, or in ./xxx_dwpose if draw_face=True.

  Finally, run python extract_meta_info_multiple_dataset.py --video_dirs ./xxx --dataset_name xxx. You will get a json file ./meta/xxx.json that records the paths of all data.

- Configure accelerate and deepspeed

  pip install accelerate

  Use accelerate config to configure deepspeed according to your machine. We use ZeRO stage 2 without any offload; our machine has 8x 80GB GPUs.

- Configure the yaml files for training

  stage 1: ./configs/train_stage_1.yaml

  stage 2: ./configs/train_stage_2.yaml

- Launch training

  stage 1: accelerate launch train_stage_1_multiGPU.py --config conf