
EchoMimic

[AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning

Install / Use

/learn @antgroup/Echomimic

README

<h1 align='center'>EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning</h1> <div align='center'> <a href='https://github.com/yuange250' target='_blank'>Zhiyuan Chen</a><sup>1</sup>&emsp; <a href='https://github.com/JoeFannie' target='_blank'>Jiajiong Cao</a><sup>1</sup>&emsp; <a href='https://github.com/octavianChen' target='_blank'>Zhiquan Chen</a>&emsp; <a href='https://lymhust.github.io/' target='_blank'>Yuming Li</a><sup>2</sup>&emsp; <a href='https://openreview.net/profile?id=~Chenguang_Ma3' target='_blank'>Chenguang Ma</a><sup>2</sup> </div> <div align='center'> <sup>1</sup>Equal Contribution&emsp; <sup>2</sup>Corresponding Authors </div> <div align='center'> Terminal Technology Department, Alipay, Ant Group. </div> <br> <div align='center'> <a href='https://antgroup.github.io/ai/echomimic/'><img src='https://img.shields.io/badge/Project-Page-blue'></a> <a href='https://huggingface.co/BadToBest/EchoMimic'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a> <a href='https://huggingface.co/spaces/BadToBest/EchoMimic'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Demo-yellow'></a> <a href='https://www.modelscope.cn/models/BadToBest/EchoMimic'><img src='https://img.shields.io/badge/ModelScope-Model-purple'></a> <a href='https://www.modelscope.cn/studios/BadToBest/BadToBest'><img src='https://img.shields.io/badge/ModelScope-Demo-purple'></a> <a href='https://arxiv.org/abs/2407.08136'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> </div>

🚀 EchoMimic Series

  • EchoMimicV1: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. GitHub
  • EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation. GitHub
  • EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation. GitHub

📣 Updates

  • [2024.12.10] 🔥 EchoMimic is accepted by AAAI 2025.
  • [2024.11.21] 🔥🔥🔥 We release our EchoMimicV2 code and models.
  • [2024.08.02] 🔥 EchoMimic is now available on Hugging Face with an A100 GPU. Thanks to Wenmeng Zhou@ModelScope.
  • [2024.07.25] 🔥🔥🔥 Accelerated models and pipeline for the Audio Driven mode are released. Inference is ~10x faster (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
  • [2024.07.23] 🔥 EchoMimic gradio demo on modelscope is ready.
  • [2024.07.23] 🔥 EchoMimic gradio demo on huggingface is ready. Thanks Sylvain Filoni@fffiloni.
  • [2024.07.17] 🔥🔥🔥 Accelerated models and pipeline for the Audio + Selected Landmarks mode are released. Inference is ~10x faster (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
  • [2024.07.14] 🔥 ComfyUI is now available. Thanks @smthemex for the contribution.
  • [2024.07.13] 🔥 Thanks to NewGenAI for the video installation tutorial.
  • [2024.07.13] 🔥 We release our pose & audio driven code and models.
  • [2024.07.12] 🔥 WebUI and GradioUI versions are released. We thank @greengerong @Robin021 and @O-O1024 for their contributions.
  • [2024.07.12] 🔥 Our paper is publicly available on arXiv.
  • [2024.07.09] 🔥 We release our audio-driven code and models.

🌅 Gallery

Audio Driven (Sing)

<table class="center"> <tr> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/d014d921-9f94-4640-97ad-035b00effbfe" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/877603a5-a4f9-4486-a19f-8888422daf78" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/e0cb5afb-40a6-4365-84f8-cb2834c4cfe7" muted="false"></video> </td> </tr> </table>

Audio Driven (English)

<table class="center"> <tr> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/386982cd-3ff8-470d-a6d9-b621e112f8a5" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/5c60bb91-1776-434e-a720-8857a00b1501" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/1f15adc5-0f33-4afa-b96a-2011886a4a06" muted="false"></video> </td> </tr> </table>

Audio Driven (Chinese)

<table class="center"> <tr> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/a8092f9a-a5dc-4cd6-95be-1831afaccf00" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/c8b5c59f-0483-42ef-b3ee-4cffae6c7a52" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/532a3e60-2bac-4039-a06c-ff6bf06cb4a4" muted="false"></video> </td> </tr> </table>

Landmark Driven

<table class="center"> <tr> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/1da6c46f-4532-4375-a0dc-0a4d6fd30a39" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/d4f4d5c1-e228-463a-b383-27fb90ed6172" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/18bd2c93-319e-4d1c-8255-3f02ba717475" muted="false"></video> </td> </tr> </table>

Audio + Selected Landmark Driven

<table class="center"> <tr> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/4a29d735-ec1b-474d-b843-3ff0bdf85f55" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/b994c8f5-8dae-4dd8-870f-962b50dc091f" muted="false"></video> </td> <td width=30% style="border: none"> <video controls loop src="https://github.com/antgroup/echomimic/assets/11451501/955c1d51-07b2-494d-ab93-895b9c43b896" muted="false"></video> </td> </tr> </table>

(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)

⚒️ Installation

Download the Code

  git clone https://github.com/BadToBest/EchoMimic
  cd EchoMimic

Python Environment Setup

  • Tested System Environments: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
  • Tested GPUs: A100 (80G) / RTX 4090D (24G) / V100 (16G)
  • Tested Python Versions: 3.8 / 3.10 / 3.11

Create a conda environment (recommended):

  conda create -n echomimic python=3.8
  conda activate echomimic

Install packages with pip

  pip install -r requirements.txt
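
After installation, a quick sanity check can confirm that the GPU is visible to PyTorch (a minimal sketch; it assumes torch was pulled in by requirements.txt, and the file name is hypothetical):

  # env_check.py -- minimal environment sanity check (assumes torch is installed)
  import torch

  print("torch:", torch.__version__)
  print("CUDA available:", torch.cuda.is_available())
  if torch.cuda.is_available():
      print("GPU:", torch.cuda.get_device_name(0))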

Download ffmpeg-static

Download and decompress ffmpeg-static, then set:

  export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
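
You can verify the setting with a short standard-library sketch (the file name is hypothetical; FFMPEG_PATH is the variable exported above):

  # check_ffmpeg.py -- confirm FFMPEG_PATH points at a working ffmpeg binary
  import os
  import subprocess

  ffmpeg_dir = os.environ.get("FFMPEG_PATH")
  if not ffmpeg_dir:
      raise SystemExit("FFMPEG_PATH is not set; export it as shown above.")
  out = subprocess.run([os.path.join(ffmpeg_dir, "ffmpeg"), "-version"],
                       capture_output=True, text=True, check=True)
  print(out.stdout.splitlines()[0])  # e.g. "ffmpeg version 4.4 ..."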

Download pretrained weights

  git lfs install
  git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights

The pretrained_weights directory is organized as follows.

  ./pretrained_weights/
  ├── denoising_unet.pth
  ├── reference_unet.pth
  ├── motion_module.pth
  ├── face_locator.pth
  ├── sd-vae-ft-mse
  │   └── ...
  ├── sd-image-variations-diffusers
  │   └── ...
  └── audio_processor
      └── whisper_tiny.pt

Here denoising_unet.pth / reference_unet.pth / motion_module.pth / face_locator.pth are the main EchoMimic checkpoints. The other models can also be downloaded from their original hubs; we thank the authors for their brilliant work.
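
Before the first run, it can help to confirm the layout with a short sketch (file and directory names are taken from the tree above; nothing else is assumed):

  # verify_weights.py -- check the expected checkpoint layout shown above
  from pathlib import Path

  root = Path("./pretrained_weights")
  files = ["denoising_unet.pth", "reference_unet.pth",
           "motion_module.pth", "face_locator.pth",
           "audio_processor/whisper_tiny.pt"]
  dirs = ["sd-vae-ft-mse", "sd-image-variations-diffusers"]
  missing = [f for f in files if not (root / f).is_file()]
  missing += [d + "/" for d in dirs if not (root / d).is_dir()]
  print("All weights found." if not missing else f"Missing: {missing}")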

Audio-Driven Algo Inference

Run the python inference scripts:

  python -u infer_audio2vid.py
  python -u infer_audio2vid_pose.py

Audio-Driven Algo Inference on Your Own Cases

Edit the inference config file ./configs/prompts/animation.yaml, and add your own case:

  test_cases:
    "path/to/your/image":
      - "path/to/your/audio"

Then run the python inference script:

  python -u infer_audio2vid.py

Motion Alignment between Reference Image and Driven Video

(First, download the checkpoints with the '_pose.pth' suffix from Hugging Face.)

Set driver_video and ref_image to your own paths in demo_motion_sync.py (see the sketch below), then run

  python -u demo_motion_sync.py
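
The edit amounts to pointing two variables at your own files, roughly as follows (a hypothetical sketch of the relevant lines; the actual script may differ):

  # In demo_motion_sync.py -- set these to your own files before running
  driver_video = "path/to/your/driver_video.mp4"    # driving video path
  ref_image = "path/to/your/reference_image.png"    # reference image path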

Audio & Pose-Driven Algo Inference

Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py

Pose-Driven Algo Inference

Set draw_mouse=True at line 135 of infer_audio2vid_pose.py. Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py