[ICCV 2025] Official implementations for the paper: VACE: All-in-One Video Creation and Editing


<p align="center"> <h1 align="center">VACE: All-in-One Video Creation and Editing</h1> <h3 align="center">(ICCV 2025)</h3> <p align="center"> <strong>Zeyinzi Jiang<sup>*</sup></strong> · <strong>Zhen Han<sup>*</sup></strong> · <strong>Chaojie Mao<sup>*&dagger;</sup></strong> · <strong>Jingfeng Zhang</strong> · <strong>Yulin Pan</strong> · <strong>Yu Liu</strong> <br> <b>Tongyi Lab - <a href="https://github.com/Wan-Video/Wan2.1"><img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 20px;'></a> </b> <br> <br> <a href="https://arxiv.org/abs/2503.07598"><img src='https://img.shields.io/badge/VACE-arXiv-red' alt='Paper PDF'></a> <a href="https://ali-vilab.github.io/VACE-Page/"><img src='https://img.shields.io/badge/VACE-Project_Page-green' alt='Project Page'></a> <a href="https://huggingface.co/collections/ali-vilab/vace-67eca186ff3e3564726aff38"><img src='https://img.shields.io/badge/VACE-HuggingFace_Model-yellow'></a> <a href="https://modelscope.cn/collections/VACE-8fa5fcfd386e43"><img src='https://img.shields.io/badge/VACE-ModelScope_Model-purple'></a> <br> </p>

Introduction

<strong>VACE</strong> is an all-in-one model designed for video creation and editing. It encompasses various tasks, including reference-to-video generation (<strong>R2V</strong>), video-to-video editing (<strong>V2V</strong>), and masked video-to-video editing (<strong>MV2V</strong>), allowing users to compose these tasks freely. This functionality enables users to explore diverse possibilities and streamlines their workflows effectively, offering a range of capabilities, such as Move-Anything, Swap-Anything, Reference-Anything, Expand-Anything, Animate-Anything, and more.

<img src='./assets/materials/teaser.jpg'>

🎉 News

  • [x] Oct 17, 2025: VACE-Benchmark has been updated to incorporate the evaluation data. VACE-Page also features creative community cases, offering researchers and community members better project insight and tracking.
  • [x] Jun 26, 2025: VACE has been accepted to ICCV 2025.
  • [x] May 14, 2025: 🔥Wan2.1-VACE-1.3B and Wan2.1-VACE-14B models are now available at HuggingFace and ModelScope!
  • [x] Mar 31, 2025: 🔥VACE-Wan2.1-1.3B-Preview and VACE-LTX-Video-0.9 models are now available at HuggingFace and ModelScope!
  • [x] Mar 31, 2025: 🔥Release code of model inference, preprocessing, and gradio demos.
  • [x] Mar 11, 2025: We propose VACE, an all-in-one model for video creation and editing.

🪄 Models

| Models                   | Download Link               | Video Size        | License    |
|--------------------------|-----------------------------|-------------------|------------|
| VACE-Wan2.1-1.3B-Preview | Huggingface 🤗 ModelScope 🤖 | ~ 81 x 480 x 832  | Apache-2.0 |
| VACE-LTX-Video-0.9       | Huggingface 🤗 ModelScope 🤖 | ~ 97 x 512 x 768  | RAIL-M     |
| Wan2.1-VACE-1.3B         | Huggingface 🤗 ModelScope 🤖 | ~ 81 x 480 x 832  | Apache-2.0 |
| Wan2.1-VACE-14B          | Huggingface 🤗 ModelScope 🤖 | ~ 81 x 720 x 1280 | Apache-2.0 |

  • Inputs of any resolution are supported, but for optimal results the video size should fall within the ranges listed above.
  • All models inherit the license of the original model.
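
For scripting, the sizes in the table above can be captured in a small lookup, e.g. to validate a requested output size before launching a job. The `recommended_size` helper below is ours, not part of the VACE codebase:

```python
# Default video sizes (frames x height x width) per model, taken from the table above.
MODEL_VIDEO_SIZES = {
    "VACE-Wan2.1-1.3B-Preview": (81, 480, 832),
    "VACE-LTX-Video-0.9": (97, 512, 768),
    "Wan2.1-VACE-1.3B": (81, 480, 832),
    "Wan2.1-VACE-14B": (81, 720, 1280),
}

def recommended_size(model_name: str) -> tuple:
    """Return the (frames, height, width) a model was tuned for."""
    try:
        return MODEL_VIDEO_SIZES[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name}") from None

print(recommended_size("Wan2.1-VACE-14B"))  # (81, 720, 1280)
```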

⚙️ Installation

The codebase was tested with Python 3.10.13, CUDA version 12.4, and PyTorch >= 2.5.1.

Setup for Model Inference

You can set up VACE model inference by running:

git clone https://github.com/ali-vilab/VACE.git && cd VACE
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu124  # If PyTorch is not installed.
pip install -r requirements.txt
pip install wan@git+https://github.com/Wan-Video/Wan2.1  # If you want to use Wan2.1-based VACE.
pip install ltx-video@git+https://github.com/Lightricks/LTX-Video@ltx-video-0.9.1 sentencepiece --no-deps # If you want to use LTX-Video-0.9-based VACE. It may conflict with Wan.

Please download your preferred base model to <repo-root>/models/.

Setup for Preprocess Tools

If you need preprocessing tools, please install:

pip install -r requirements/annotator.txt

Please download VACE-Annotators to <repo-root>/models/.

Local Directories Setup

It is recommended to download VACE-Benchmark to <repo-root>/benchmarks/, since the examples in run_vace_xxx.sh reference it from there.

We recommend organizing the local directories as follows:

VACE
├── ...
├── benchmarks
│   └── VACE-Benchmark
│       └── assets
│           └── examples
│               ├── animate_anything
│               │   └── ...
│               └── ...
├── models
│   ├── VACE-Annotators
│   │   └── ...
│   ├── VACE-LTX-Video-0.9
│   │   └── ...
│   └── VACE-Wan2.1-1.3B-Preview
│       └── ...
└── ...
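
A few lines of stdlib Python can sanity-check this layout before launching anything. The `missing_dirs` helper is illustrative, not part of the repo:

```python
import tempfile
from pathlib import Path

# Sub-directories the README recommends under the repo root.
EXPECTED_DIRS = [
    "benchmarks/VACE-Benchmark/assets/examples",
    "models/VACE-Annotators",
    "models/VACE-Wan2.1-1.3B-Preview",
]

def missing_dirs(repo_root: str) -> list:
    """Return the recommended sub-directories that do not exist yet."""
    root = Path(repo_root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

# Example against a scratch tree with only the annotators in place:
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "models/VACE-Annotators").mkdir(parents=True)
    print(missing_dirs(tmp))  # lists the two directories still to be created
```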

🚀 Usage

In VACE, users can input a text prompt along with optional video, mask, and image inputs for video generation or editing. Detailed instructions for using VACE can be found in the User Guide.

Inference CLI

1) End-to-End Running

To run VACE without diving into implementation details, we suggest the end-to-end pipeline. For example:

# run V2V depth
python vace/vace_pipeline.py --base wan --task depth --video assets/videos/test.mp4 --prompt 'xxx'

# run MV2V inpainting by providing bbox
python vace/vace_pipeline.py --base wan --task inpainting --mode bbox --bbox 50,50,550,700 --video assets/videos/test.mp4 --prompt 'xxx'

This script runs video preprocessing and model inference sequentially. You need to specify all the required preprocessing args (--task, --mode, --bbox, --video, etc.) and inference args (--prompt, etc.). The output video, together with the intermediate video, mask, and images, is saved to ./results/ by default.

💡Note: Please refer to run_vace_pipeline.sh for usage examples of different task pipelines.
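
When scripting many runs, the same invocation can be assembled programmatically. A minimal sketch (the `build_pipeline_cmd` helper is ours, not part of the repo; the flags mirror the commands above):

```python
import shlex

def build_pipeline_cmd(base: str, task: str, video: str, prompt: str, **extra) -> list:
    """Assemble an argv list for vace/vace_pipeline.py from keyword flags."""
    cmd = ["python", "vace/vace_pipeline.py",
           "--base", base, "--task", task,
           "--video", video, "--prompt", prompt]
    for flag, value in extra.items():  # e.g. mode="bbox", bbox="50,50,550,700"
        cmd += [f"--{flag}", str(value)]
    return cmd

cmd = build_pipeline_cmd("wan", "inpainting", "assets/videos/test.mp4", "xxx",
                         mode="bbox", bbox="50,50,550,700")
print(shlex.join(cmd))
```

The argv list can then be handed to subprocess.run directly, which avoids shell-quoting issues with prompts that contain spaces.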

2) Preprocessing

For more flexible control over the input, user inputs need to be preprocessed into src_video, src_mask, and src_ref_images before VACE model inference. Each preprocessor is assigned a task name, so simply call vace_preproccess.py and specify the task name and task params. For example:

# process video depth
python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4

# process video inpainting by providing bbox
python vace/vace_preproccess.py --task inpainting --mode bbox --bbox 50,50,550,700 --video assets/videos/test.mp4

The outputs will be saved to ./processed/ by default.

💡Note: Please refer to run_vace_pipeline.sh for the preprocessing methods of different tasks. Moreover, refer to vace/configs/ for all the pre-defined tasks and their required params. You can also customize preprocessors by implementing them under annotators and registering them in configs.
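
As an illustration of bbox mode, here is how an "x1,y1,x2,y2" string could expand into a per-pixel binary mask. This is a pure-Python sketch of the idea; the actual annotators in the repo may implement it differently:

```python
def bbox_to_mask(bbox: str, height: int, width: int) -> list:
    """Turn an 'x1,y1,x2,y2' string into a 0/1 mask of shape (height, width)."""
    x1, y1, x2, y2 = (int(v) for v in bbox.split(","))
    return [[1 if (x1 <= x < x2 and y1 <= y < y2) else 0 for x in range(width)]
            for y in range(height)]

# Tiny 6x4 frame with a 3x2 box, so the inpainting region is easy to see:
mask = bbox_to_mask("2,1,5,3", height=4, width=6)
for row in mask:
    print(row)
```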

3) Model Inference

Using the input data obtained from Preprocessing, the model inference process can be performed as follows:

# For Wan2.1 single GPU inference (1.3B-480P)
python vace/vace_wan_inference.py --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"

# For Wan2.1 Multi GPU Acceleration inference (1.3B-480P)
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 vace/vace_wan_inference.py --dit_fsdp --t5_fsdp --ulysses_size 1 --ring_size 8 --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"

# For Wan2.1 Multi GPU Acceleration inference (14B-720P)
torchrun --nproc_per_node=8 vace/vace_wan_inference.py --dit_fsdp --t5_fsdp --ulysses_size 8 --ring_size 1 --size 720p --model_name 'vace-14B' --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"
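
Note that the multi-GPU commands above keep ulysses_size × ring_size equal to --nproc_per_node (8 = 1 × 8 and 8 × 1). Under that assumption, a quick validity check for a launch config could look like this (illustrative helper, not part of the repo):

```python
def valid_parallel_config(nproc: int, ulysses_size: int, ring_size: int) -> bool:
    """The sequence-parallel degree must exactly cover the launched processes."""
    return ulysses_size * ring_size == nproc

print(valid_parallel_config(8, 1, 8))  # True  (1.3B-480P example)
print(valid_parallel_config(8, 8, 1))  # True  (14B-720P example)
print(valid_parallel_config(8, 4, 4))  # False (16 != 8)
```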