DeepCache: Accelerating Diffusion Models for Free (CVPR 2024)

<div align="center"> <img src="https://github.com/horseee/Diffusion_DeepCache/blob/master/static/images/example_compress.gif" width="100%" ></img> <br> <em> (Results on Stable Diffusion v1.5. Left: 50 PLMS steps. Right: 2.3x acceleration upon 50 PLMS steps) </em> </div> <br>

DeepCache: Accelerating Diffusion Models for Free
Xinyin Ma, Gongfan Fang, Xinchao Wang
Learning and Vision Lab, National University of Singapore
🥯[Arxiv]🎄[Project Page]

Why DeepCache

  • 🚀 Training-free and almost lossless
  • 🚀 Supports Stable Diffusion, Stable Diffusion XL, Stable Video Diffusion, the Stable Diffusion / SDXL inpainting pipelines, the Stable Diffusion img2img pipeline, and DDPM
  • 🚀 Compatible with sampling algorithms like DDIM and PLMS

Updates

  • June 27, 2024: 🔥Our new work AsyncDiff enables parallel inference of diffusion models on multiple GPUs. Check out our paper and code!
  • June 5, 2024: 🔥Our new work, Learning-to-Cache, is an improved version of DeepCache on DiT. Code and checkpoints are released.
  • January 5, 2024: 💥A doc page for DeepCache has been added to Diffusers! Check here for more information. Many thanks to the Diffusers team!
  • December 26, 2023: 🔥Released a plug-and-play implementation of DeepCache that no longer requires any modification of the diffusers code! Check here for detailed usage! Big thanks to @yuanshi9815 for contributing the code!
  • December 25, 2023: A demo is available via Colab.
  • December 21, 2023: Released the code for Stable Video Diffusion and Text2Video-Zero. In the figure below, the upper row shows the original videos generated by SVD-XT, and the lower row shows the same videos accelerated by DeepCache. For Text2Video-Zero, the results can be found here.
<div align="center"> <img src="assets/svd.gif" width="90%" ></img> <br> <em> (1.7x acceleration of SVD-XT) </em> </div>
  • December 20, 2023: Released the code for DDPM. See here for the experimental code and instructions.

  • December 6, 2023: Released the code for Stable Diffusion XL. Results for stabilityai/stable-diffusion-xl-base-1.0 are shown in the figure below, with the same prompts as in the first figure.

<div align="center"> <img src="assets/sdxl.png" width="90%" ></img> <br> <em> (2.6x acceleration of Stable Diffusion XL) </em> </div>

Introduction

We introduce DeepCache, a novel training-free and almost lossless paradigm that accelerates diffusion models from the perspective of model architecture. Exploiting the structure of the U-Net, we reuse the high-level features across adjacent denoising steps while updating the low-level features at minimal cost. DeepCache accelerates Stable Diffusion v1.5 by 2.3x with only a 0.05 decline in CLIP Score, and LDM-4-G (ImageNet) by 4.1x with a 0.22 decrease in FID.
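The idea of refreshing high-level features only every few steps can be illustrated with a short sketch. This is our own simplification of the uniform caching strategy, not the library's internal code; the function name and the interval value are illustrative:

```python
def deepcache_schedule(num_steps, cache_interval):
    """'full' = run the whole U-Net and refresh the cached high-level
    features; 'retrieve' = reuse the cache and recompute only the
    shallow layers."""
    return ["full" if t % cache_interval == 0 else "retrieve"
            for t in range(num_steps)]

schedule = deepcache_schedule(num_steps=10, cache_interval=3)
# Steps 0, 3, 6 and 9 refresh the cache; all other steps reuse it.
```

Because the retrieval steps skip the expensive deep layers of the U-Net, most of the per-step cost is paid only on the "full" steps.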

<div align="center"> <img width="50%" alt="image" src="https://github.com/horseee/DeepCache/assets/18592211/9ce3930c-c84c-4af8-8c6a-b6803a5a7b1d"> </div>

Quick Start

Install

pip install DeepCache

Usage

import torch

# Load the original pipeline
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda:0")

# Import and attach the DeepCacheSDHelper
from DeepCache import DeepCacheSDHelper
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(
    cache_interval=3,
    cache_branch_id=0,
)
helper.enable()

# Generate an image (any text prompt works here)
prompt = "a photo of an astronaut riding a horse on mars"
deepcache_image = pipe(
    prompt,
    output_type="pt",
).images[0]
helper.disable()

Here we take the Stable Diffusion pipeline as an example. You can replace pipe with any variant of the Stable Diffusion pipeline, including SDXL, SVD, and more; examples can be found in the script. The argument cache_branch_id specifies the selected skip branch: deeper skip branches are engaged only during the caching steps and excluded during the retrieval steps. The argument cache_interval is the number of steps between cache updates.
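To see how cache_interval trades compute for quality, it helps to count how often the full U-Net actually runs. This is a back-of-the-envelope sketch of our own (real wall-clock speedups also depend on the chosen skip branch and the hardware):

```python
import math

def full_unet_passes(num_steps, cache_interval):
    # Under a uniform schedule, the full U-Net runs on steps
    # 0, cache_interval, 2*cache_interval, ...,
    # i.e. ceil(num_steps / cache_interval) times in total.
    return math.ceil(num_steps / cache_interval)

print(full_unet_passes(50, 3))  # 17 full passes instead of 50
print(full_unet_passes(50, 5))  # 10 full passes: faster, but lossier
```

Larger intervals cut more compute but reuse stale features for longer, which is why quality degrades as cache_interval grows.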

A general script for SD

python main.py --model_type sdxl #Support [sdxl, sd1.5, sd2.1, svd, sd-inpaint, sdxl-inpaint, sd-img2img]

Experimental code for DeepCache

The implementation above requires no changes to the forward or __call__ functions of the Diffusers pipelines and is therefore more general. The following section contains the experimental code used to reproduce the results in the paper. It was implemented separately for each model structure and pipeline, and may therefore break with newer versions of diffusers.

Setup

pip install diffusers==0.24.0 transformers

Stable Diffusion XL

python stable_diffusion_xl.py --model stabilityai/stable-diffusion-xl-base-1.0
<details> <summary>Output:</summary>
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00,  6.62it/s]
2023-12-06 01:44:28,578 - INFO - Running baseline...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:17<00:00,  2.93it/s]
2023-12-06 01:44:46,095 - INFO - Baseline: 17.52 seconds
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00,  8.06it/s]
2023-12-06 01:45:02,865 - INFO - Running DeepCache...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:06<00:00,  8.01it/s]
2023-12-06 01:45:09,573 - INFO - DeepCache: 6.71 seconds
2023-12-06 01:45:10,678 - INFO - Saved to output.png. Done!
</details>

You can add --refine at the end of the command to activate the refiner model for SDXL.

Stable Diffusion v1.5

python stable_diffusion.py --model runwayml/stable-diffusion-v1-5
<details> <summary>Output:</summary>
2023-12-03 16:18:13,636 - INFO - Loaded safety_checker as StableDiffusionSafetyChecker from `safety_checker` subfolder of runwayml/stable-diffusion-v1-5.
2023-12-03 16:18:13,699 - INFO - Loaded vae as AutoencoderKL from `vae` subfolder of runwayml/stable-diffusion-v1-5.
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00,  5.88it/s]
2023-12-03 16:18:22,837 - INFO - Running baseline...
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 15.33it/s]
2023-12-03 16:18:26,174 - INFO - Baseline: 3.34 seconds
2023-12-03 16:18:26,174 - INFO - Running DeepCache...
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 34.06it/s]
2023-12-03 16:18:27,718 - INFO - DeepCache: 1.54 seconds
2023-12-03 16:18:27,935 - INFO - Saved to output.png. Done!
</details>

Stable Diffusion v2.1

python stable_diffusion.py --model stabilityai/stable-diffusion-2-1
<details> <summary>Output:</summary>
2023-12-03 16:21:17,858 - INFO - Loaded feature_extractor as CLIPImageProcessor from `feature_extractor` subfolder of stabilityai/stable-diffusion-2-1.
2023-12-03 16:21:17,864 - INFO - Loaded scheduler as DDIMScheduler from `scheduler` subfolder of stabilityai/stable-diffusion-2-1.
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████| 6/6 [00:01<00:00,  5.35it/s]
2023-12-03 16:21:49,770 - INFO - Running baseline...
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:14<00:00,  3.42it/s]
2023-12-03 16:22:04,551 - INFO - Baseline: 14.78 seconds
2023-12-03 16:22:04,551 - INFO - Running DeepCache...
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:08<00:00,  6.10it/s]
2023-12-03 16:22:12,911 - INFO - DeepCache: 8.36 seconds
2023-12-03 16:22:13,417 - INFO - Saved to output.png. Done!
</details>

Currently, our code supports models that can be loaded by StableDiffusionPipeline. You can specify the model name with the --model argument, which defaults to runwayml/stable-diffusion-v1-5.

Stable Video Diffusion

python stable_video_diffusion.py
<details> <summary>Output:</summary>
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [0
</details>