AnimateDiff
This repository is the official implementation of AnimateDiff [ICLR2024 Spotlight]. It is a plug-and-play module that turns most community text-to-image models into animation generators, without the need for additional training.
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
<br>
Yuwei Guo,
Ceyuan Yang✝,
Anyi Rao,
Zhengyang Liang,
Yaohui Wang,
Yu Qiao,
Maneesh Agrawala,
Dahua Lin,
Bo Dai
(✝Corresponding Author)
Note: The main branch is for Stable Diffusion V1.5; for Stable Diffusion XL, please refer to the sdxl-beta branch.
Quick Demos
More results can be found in the Gallery. Some of them are contributed by the community.
<table class="center"> <tr> <td><img src="__assets__/animations/model_01/01.gif"></td> <td><img src="__assets__/animations/model_01/02.gif"></td> <td><img src="__assets__/animations/model_01/03.gif"></td> <td><img src="__assets__/animations/model_01/04.gif"></td> </tr> </table> <p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/30240/toonyou">ToonYou</a></p> <table> <tr> <td><img src="__assets__/animations/model_03/01.gif"></td> <td><img src="__assets__/animations/model_03/02.gif"></td> <td><img src="__assets__/animations/model_03/03.gif"></td> <td><img src="__assets__/animations/model_03/04.gif"></td> </tr> </table> <p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4201/realistic-vision-v20">Realistic Vision V2.0</a></p>

Quick Start
Note: AnimateDiff is also officially supported by Diffusers. Visit the AnimateDiff Diffusers Tutorial for more details. The instructions below are for working with this repository.
Note: For all scripts, checkpoint downloading is handled automatically, so a script may take longer the first time it is executed.
1. Setup repository and environment
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
pip install -r requirements.txt
2. Launch the sampling script!
The generated samples can be found in the samples/ folder.
2.1 Generate animations with community models
python -m scripts.animate --config configs/prompts/1_animate/1_1_animate_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_2_animate_FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_3_animate_ToonYou.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_4_animate_MajicMix.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_5_animate_RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_6_animate_Lyriel.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_7_animate_Tusun.yaml
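Each of the configs above bundles the model paths and sampling settings for one example. A rough illustrative sketch of what such a file might contain (the field names here are assumptions for illustration; the shipped configs under configs/prompts/ are the authoritative reference):

```yaml
ToonYou:
  motion_module: "models/Motion_Module/mm_sd_v15_v2.ckpt"              # AnimateDiff motion weights
  dreambooth_path: "models/DreamBooth_LoRA/toonyou_beta6.safetensors"  # community T2I checkpoint
  steps: 25                # denoising steps per sample
  guidance_scale: 7.5      # classifier-free guidance strength
  prompt:
    - "masterpiece, best quality, 1girl, walking on the beach"
  n_prompt:
    - "worst quality, low quality"
```

Swapping in a different community checkpoint is then mostly a matter of pointing the config at its weights and adjusting the prompts.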
2.2 Generate animations with MotionLoRA control
python -m scripts.animate --config configs/prompts/2_motionlora/2_motionlora_RealisticVision.yaml
2.3 More control with SparseCtrl RGB and sketch
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_1_sparsectrl_i2v.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_2_sparsectrl_rgb_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_3_sparsectrl_sketch_RealisticVision.yaml
2.4 Gradio app
We created a Gradio demo to make AnimateDiff easier to use.
By default, the demo will run at localhost:7860.
python -u app.py
<img src="__assets__/figs/gradio.jpg" style="width: 75%">
Technical Explanation
<details close> <summary>Technical Explanation</summary>

AnimateDiff
AnimateDiff aims to learn transferable motion priors that can be applied to other variants of the Stable Diffusion family. To this end, we design a training pipeline consisting of three stages.
<img src="__assets__/figs/adapter_explain.png" style="width:100%">

- In the 1. Alleviate Negative Effects stage, we train the domain adapter, e.g., v3_sd15_adapter.ckpt, to fit defective visual artifacts (e.g., watermarks) in the training dataset. This also benefits the disentangled learning of motion and spatial appearance. By default, the adapter can be removed at inference; it can also be kept in the model, with its effect adjusted by a LoRA scale.
- In the 2. Learn Motion Priors stage, we train the motion module, e.g., v3_sd15_mm.ckpt, to learn real-world motion patterns from videos.
- In the 3. (optional) Adapt to New Patterns stage, we train MotionLoRA, e.g., v2_lora_ZoomIn.ckpt, to efficiently adapt the motion module to specific motion patterns (camera zooming, rolling, etc.).
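Adjusting the domain adapter's effect by a LoRA scale can be understood as standard LoRA merging: the adapted weight is the base weight plus a scaled low-rank update. A minimal NumPy sketch of the idea (function and variable names are illustrative, not the repository's API):

```python
import numpy as np

def merge_lora(w_base, lora_down, lora_up, scale=1.0):
    """Merge a low-rank LoRA update into a base weight matrix.

    w_base:    (out, in) base weight
    lora_down: (rank, in) projection into the low-rank space
    lora_up:   (out, rank) projection back out
    scale:     0.0 removes the adapter entirely, 1.0 applies it fully
    """
    return w_base + scale * (lora_up @ lora_down)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
down = rng.standard_normal((2, 8))   # rank-2 update
up = rng.standard_normal((8, 2))

# scale=0 recovers the original weights, i.e. "removing" the adapter
assert np.allclose(merge_lora(w, down, up, scale=0.0), w)
# intermediate scales interpolate the update linearly
half = merge_lora(w, down, up, scale=0.5)
full = merge_lora(w, down, up, scale=1.0)
assert np.allclose(full - w, 2 * (half - w))
```

Because the update is linear in the scale, any value between 0 and 1 smoothly interpolates between the base model and the fully adapted one.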
SparseCtrl
SparseCtrl aims to add more control to text-to-video models by accepting sparse inputs (e.g., a few RGB images or sketches). Its technical details can be found in the following paper:
SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models
Yuwei Guo,
Ceyuan Yang✝,
Anyi Rao,
Maneesh Agrawala,
Dahua Lin,
Bo Dai
(✝Corresponding Author)
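The core idea of sparse conditioning can be sketched as follows: condition maps are supplied only at a few keyframe indices, and the remaining frames receive zeros plus a binary mask channel marking which frames are actually conditioned. This is an illustrative sketch of the data layout, not the repository's implementation:

```python
import numpy as np

def build_sparse_condition(num_frames, cond_maps, cond_indices):
    """Scatter sparse condition maps into a dense per-frame tensor.

    num_frames:   total number of frames N in the clip
    cond_maps:    (K, C, H, W) condition maps for K keyframes
    cond_indices: the K frame indices that are conditioned
    Returns (N, C+1, H, W): condition channels plus a mask channel.
    """
    k, c, h, w = cond_maps.shape
    dense = np.zeros((num_frames, c + 1, h, w), dtype=cond_maps.dtype)
    for i, idx in enumerate(cond_indices):
        dense[idx, :c] = cond_maps[i]   # place the condition map
        dense[idx, c] = 1.0             # mark this frame as conditioned
    return dense

# e.g. 16 frames, RGB conditions only at frames 0 and 15 (interpolation-style use)
cond = np.ones((2, 3, 4, 4), dtype=np.float32)
dense = build_sparse_condition(16, cond, [0, 15])
assert dense.shape == (16, 4, 4, 4)
assert dense[0, 3].all() and dense[15, 3].all()   # mask set on keyframes
assert not dense[7].any()                         # unconditioned frames stay zero
```

The mask channel is what lets a single encoder handle an arbitrary number of condition maps: one image gives image animation, first and last frames give interpolation, and a full set gives per-frame control.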
Model Versions
<details close> <summary>Model Versions</summary>

AnimateDiff v3 and SparseCtrl (2023.12)
In this version, we use a Domain Adapter LoRA for image model finetuning, which provides more flexibility at inference. We also implement two SparseCtrl encoders (RGB image and scribble), which can take an arbitrary number of condition maps to control the animation contents.
<details close> <summary>AnimateDiff v3 Model Zoo</summary>

| Name | HuggingFace | Type | Storage | Description |
| - | - | - | - | - |
| v3_adapter_sd_v15.ckpt | Link | Domain Adapter | 97.4 MB | |
| v3_sd15_mm.ckpt | Link | Motion Module | 1.56 GB | |
| v3_sd15_sparsectrl_scribble.ckpt | Link | SparseCtrl Encoder | 1.86 GB | scribble condition |
| v3_sd15_sparsectrl_rgb.ckpt | Link | SparseCtrl Encoder | 1.85 GB | RGB image condition |
Limitations
- Small flickering is noticeable;
- To stay compatible with community models, there are no specific optimizations for general T2V, leading to limited visual quality in this setting;
- (Style Alignment) For usages such as image animation/interpolation, it is recommended to use images generated by the same community model.
Demos
<table class="center"> <tr style="line-height: 0"> <td width=25% style="border: none; text-align: center">Input (by RealisticVision)</td> <td width=25% style="border: none; text-align: center">Animation</td> <td width=25% style="border: none; text-align: center">Input</td> <td width=25% style="border: none; text-align: center">Animation</td> </tr> <tr> <td width=25% style="border: none"><img src="__assets__/demos/image/RealisticVision_firework.png" style="width:100%"></td> <td width=25% style="border: none"><img src="__assets__/animations/v3/animation_fireworks.gif" style="width:100%"></td> <td width=25% style="border: none"><img src="__assets__/demos/image/RealisticVision_sunset.png" style="width:100%"></td> <td width=25% style="border: none"><img src="__assets__/animations/v3/animation_sunset.gif" style="width:100%"></td> </tr> </table> <table class="center"> <tr style="line-height: 0"> <td width=25% style="border: none; text-align: center">Input Scribble</td> <td width=25% style="border: none; text-align: center">Output</td> <td width=25% style="border: none; text-align: center">Input Scribbles</td> <td width=25% style="border: none; text-align: center">Output</td> </tr> <tr> <td width=25% style="border: none"><img src="__assets__/demos/scribble/scribble_1.png" style="width:100%"></td> <td width=25% style="border: none"><img src="__assets__/animations/v3/sketch_boy.gif" style="width:100%"></td> <td width=25% style="border: none"><img src="__assets__/demos/scribble/scribble_2_readme.png" style="width:100%"></td> <td width=25% style="border: none"><img src="__assets__/animations/v3/sketch_city.gif" style="width:100%"></td> </tr> </table>

AnimateDiff SDXL-Beta (2023.11)
Release the Motion Module