Inference-only implementation of "One-Step Diffusion Distillation through Score Implicit Matching" [NeurIPS 2024]


One-Step Diffusion Distillation through Score Implicit Matching


Overview

This repository contains inference-only code for our work, SIM, an approach for distilling pre-trained diffusion models into efficient one-step generators. Unlike the multi-step sampling required by standard diffusion models, SIM produces high-quality samples in a single step, and it does so without needing training samples for distillation. It can compute gradients for a wide class of score-based divergences, yielding strong results: an FID of 2.06 for unconditional generation and 1.96 for class-conditional generation on CIFAR-10. Applied to a state-of-the-art transformer-based diffusion model for text-to-image generation, SIM reaches an aesthetic score of 6.42, outperforming existing one-step generators.
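As a rough sketch of the idea above (our paraphrase of the overview, not the paper's exact notation): let $g_\theta$ be the one-step generator, $p_{\theta,t}$ the distribution of its samples diffused to noise level $t$, and $s_{q_t} = \nabla_x \log q_t$ the teacher's score at that level. A score-based divergence of the general form

$$
D(\theta) \;=\; \mathbb{E}_{t,\; x_t \sim p_{\theta,t}} \Big[ d\big( s_{p_{\theta,t}}(x_t) - s_{q_t}(x_t) \big) \Big],
$$

where $d(\cdot)$ is some distance function, measures how far the generator's score is from the teacher's. The point of the method, as we read the overview, is that $\nabla_\theta D$ can be computed using only the teacher model and samples drawn from the generator itself, with no training data required for distillation.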

Released Models

We have released a model trained for more steps, with better generation quality. Please visit https://huggingface.co/maple-research-lab/SIM for the checkpoint.

Inference

```bash
python inference.py \
  --dit_model_path /path/to/our_model \
  --text_enc_path /path/to/PixArt-alpha/t5-v1_1-xxl \
  --vae_path /path/to/PixArt-alpha/sd-vae-ft-ema \
  --prompt "a colorful painting of a beautiful landscape" \
  --output_dir out-0 \
  --batch 4 --seed 112 --dtype bf16 --device cuda --init_sigma 2.5
```
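For orientation, the flags above suggest the standard one-step sampling pattern: draw Gaussian latents at scale `--init_sigma` and map them to images with a single generator forward pass, with no iterative denoising loop. Below is a toy, self-contained sketch of that pattern; the real `inference.py` internals, and the `one_step_sample` name, are our assumptions, not this repo's API:

```python
import numpy as np

def one_step_sample(generator, batch, latent_shape, init_sigma=2.5, seed=112):
    """One-step generation: scaled Gaussian noise in, samples out.

    `generator` stands in for the distilled DiT; a real pipeline would
    also encode the prompt with T5 and decode the latents with the VAE.
    """
    rng = np.random.default_rng(seed)
    # Latents are drawn at the noise scale the generator was distilled for.
    z = init_sigma * rng.standard_normal((batch, *latent_shape))
    return generator(z)  # single forward pass, no sampling loop

# Toy stand-in generator: squashes latents into (-1, 1) "images".
images = one_step_sample(np.tanh, batch=4, latent_shape=(4, 8, 8))
print(images.shape)  # (4, 4, 8, 8)
```

The fixed seed makes the draw reproducible, mirroring the `--seed 112` flag in the command above.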

License

One-Step Diffusion Distillation through Score Implicit Matching is released under the GNU Affero General Public License v3.0 (AGPL-3.0).

Acknowledgements

Zhengyang Geng is supported by funding from the Bosch Center for AI. Zico Kolter gratefully acknowledges Bosch’s funding for the lab.

We also acknowledge the authors of Diff-Instruct and Score identity Distillation for their contributions to high-quality diffusion distillation code. We appreciate the authors of PixArt-α for making their DiT-based diffusion model public.

Collaboration

For inquiries regarding accessing our latest models or collaboration, please contact Guo-jun Qi: guojunq at gmail dot com

📄 Citation

```bibtex
@article{luo2024one,
  title={One-Step Diffusion Distillation through Score Implicit Matching},
  author={Luo, Weijian and Huang, Zemin and Geng, Zhengyang and Kolter, J Zico and Qi, Guo-jun},
  journal={arXiv preprint arXiv:2410.16794},
  year={2024}
}
```