FasterCache
[ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality
About
We present FasterCache, a novel training-free strategy designed to accelerate the inference of video diffusion models while preserving high-quality generation. For more details and visual results, check out our Project Page.
https://github.com/user-attachments/assets/035c50c2-7b74-4755-ac1e-e5aa1cffba2a
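The core idea behind training-free caching strategies of this kind can be illustrated with a toy denoising loop: features from adjacent timesteps are highly similar, so the expensive model forward pass is skipped on some steps and a cached feature is reused instead. The sketch below is a hedged, self-contained illustration of that general principle, not the repository's actual API; `toy_model`, `denoise_with_cache`, and `cache_interval` are hypothetical names invented for this example.

```python
import math

def toy_model(x, t):
    # Stand-in for an expensive diffusion transformer forward pass.
    return [v * math.cos(t * 0.1) for v in x]

def denoise_with_cache(x, num_steps, cache_interval=2):
    """Toy denoising loop that recomputes features only on every
    `cache_interval`-th step and reuses the cached features otherwise,
    mimicking (in spirit) timestep-level feature reuse."""
    cache = None
    model_calls = 0
    for t in range(num_steps, 0, -1):
        if cache is None or t % cache_interval == 0:
            feats = toy_model(x, t)   # full forward pass
            model_calls += 1
            cache = feats
        else:
            feats = cache             # reuse features from the cached step
        x = [xi - 0.05 * fi for xi, fi in zip(x, feats)]
    return x, model_calls

x, calls = denoise_with_cache([1.0, -0.5], num_steps=8)
print(calls)  # 4 full forward passes instead of 8
```

With `cache_interval=2`, half of the forward passes are skipped; the real method additionally corrects the reused features so that quality is preserved, which this toy omits.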
News
- (🔥 New) 2024/11/8: We support a multi-device inference script for CogVideoX.
- (🔥 New) 2024/11/8: We implemented FasterCache on top of Mochi.
Usage
Installation
Run the following commands to create an Anaconda environment and install FasterCache.
conda create -n fastercache python=3.10 -y
conda activate fastercache
git clone https://github.com/Vchitect/FasterCache
cd FasterCache
pip install -e .
Inference
We currently support Open-Sora 1.2, Open-Sora-Plan 1.1, Latte, CogVideoX-2B & 5B, Vchitect 2.0, and Mochi. You can achieve accelerated sampling by executing the scripts we provide.
Open-Sora
For single-GPU inference on Open-Sora, run the following command:
bash scripts/opensora/fastercache_sample_opensora.sh
For multi-GPU inference on Open-Sora, run the following command:
bash scripts/opensora/fastercache_sample_multi_device_opensora.sh
Open-Sora-Plan
For single-GPU inference on Open-Sora-Plan, run the following command:
bash scripts/opensora_plan/fastercache_sample_opensoraplan.sh
For multi-GPU inference on Open-Sora-Plan, run the following command:
bash scripts/opensora_plan/fastercache_sample_multi_device_opensoraplan.sh
Latte
For single-GPU inference on Latte, run the following command:
bash scripts/latte/fastercache_sample_latte.sh
For multi-GPU inference on Latte, run the following command:
bash scripts/latte/fastercache_sample_multi_device_latte.sh
CogVideoX
For single-GPU inference on CogVideoX-2B, run the following command:
bash scripts/cogvideox/fastercache_sample_cogvideox.sh
For multi-GPU inference on CogVideoX-2B, run the following command:
bash scripts/cogvideox/fastercache_sample_cogvideox_multi_device.sh
For inference on CogVideoX-5B, run the following command:
bash scripts/cogvideox/fastercache_sample_cogvideox5b.sh
Vchitect 2.0
For inference on Vchitect 2.0, run the following command:
bash scripts/vchitect/fastercache_sample_vchitect.sh
Mochi
We also provide acceleration scripts for Mochi. Before running them, please follow the official Mochi repository's instructions to download the models, set up the environment, and install the genmo package. Then execute the following script:
bash scripts/mochi/fastercache_sample_mochi.sh
BibTeX
@inproceedings{lv2024fastercache,
  title={FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality},
  author={Lv, Zhengyao and Si, Chenyang and Song, Junhao and Yang, Zhenyu and Qiao, Yu and Liu, Ziwei and Wong, Kwan-Yee K.},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2025}
}
Acknowledgement
This repository borrows code from VideoSys, Vchitect-2.0, Mochi, and CogVideo. Thanks for their contributions!