SODEC
[AAAI'26] Steering One-Step Diffusion Model with Fidelity-Rich Decoder for Fast Image Compression
Zheng Chen, Mingde Zhou, Jinpei Guo, Jiale Yuan, Yifei Ji, and Yulun Zhang, "Steering One-Step Diffusion Model with Fidelity-Rich Decoder for Fast Image Compression", AAAI, 2026
<div> <a href="https://github.com/zhengchen1999/SODEC/releases" target='_blank' style="text-decoration: none;"><img src="https://img.shields.io/github/downloads/zhengchen1999/SODEC/total?color=green&style=flat"></a> <a href="https://github.com/zhengchen1999/SODEC" target='_blank' style="text-decoration: none;"><img src="https://visitor-badge.laobi.icu/badge?page_id=zhengchen1999/SODEC"></a> <a href="https://github.com/zhengchen1999/SODEC" target='_blank' style="text-decoration: none;"><img src="https://img.shields.io/github/stars/zhengchen1999/SODEC?style=social"></a> </div>[project] [arXiv] [supplementary material] [dataset] [pretrained models]
🔥🔥🔥 News
- 2025-11-08: SODEC is accepted at AAAI 2026. 🎉🎉🎉
- 2025-08-07: This repo is released.
Abstract: Diffusion-based image compression has demonstrated impressive perceptual performance. However, it suffers from two critical drawbacks: (1) excessive decoding latency due to multi-step sampling, and (2) poor fidelity resulting from over-reliance on generative priors. To address these issues, we propose SODEC, a novel single-step diffusion image compression model. We argue that in image compression, a sufficiently informative latent renders multi-step refinement unnecessary. Based on this insight, we leverage a pre-trained VAE-based model to produce latents with rich information, and replace the iterative denoising process with a single-step decoding. Meanwhile, to improve fidelity, we introduce the fidelity guidance module, encouraging outputs that are faithful to the original image. Furthermore, we design the rate annealing training strategy to enable effective training under extremely low bitrates. Extensive experiments show that SODEC significantly outperforms existing methods, achieving superior rate–distortion–perception performance. Moreover, compared to previous diffusion-based compression models, SODEC improves decoding speed by more than 20×.
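The ideas in the abstract can be illustrated with a toy sketch. This is **not** SODEC's actual implementation; the encoder/decoder maps, `guidance_weight`, and the annealing schedule are all hypothetical stand-ins, shown only to make the single-step decoding, fidelity guidance, and rate annealing concepts concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the real model operates on image latents).
LATENT_DIM, IMAGE_DIM = 16, 64

# Hypothetical stand-ins for the paper's components: a VAE-style encoder that
# yields an information-rich latent, a one-step generative decoder, and a
# fidelity branch that reconstructs directly from the latent.
W_enc = rng.normal(size=(IMAGE_DIM, LATENT_DIM)) / np.sqrt(IMAGE_DIM)
W_dec = rng.normal(size=(LATENT_DIM, IMAGE_DIM)) / np.sqrt(LATENT_DIM)
W_fid = rng.normal(size=(LATENT_DIM, IMAGE_DIM)) / np.sqrt(LATENT_DIM)

def encode(x):
    """VAE-style encoder: image -> informative latent (one linear map here)."""
    return x @ W_enc

def decode_one_step(z, guidance_weight=0.5):
    """Single-step decoding: one forward pass, no iterative denoising.

    The fidelity-branch output is blended with the generative decoder output,
    steering the result toward the original image. `guidance_weight` is a
    made-up knob, not a value from the paper.
    """
    generative = np.tanh(z @ W_dec)   # one-step generative decode
    fidelity = z @ W_fid              # fidelity-guidance reconstruction
    return (1 - guidance_weight) * generative + guidance_weight * fidelity

def rate_weight(step, total_steps, lam_final=1.0):
    """Rate-annealing sketch: ramp the rate penalty from 0 to its final value,
    so early training focuses on reconstruction before tightening bitrate."""
    return lam_final * min(step / total_steps, 1.0)

x = rng.normal(size=(1, IMAGE_DIM))
x_hat = decode_one_step(encode(x))
print(x_hat.shape)  # one forward pass produces the full reconstruction
print([rate_weight(s, 10) for s in (0, 5, 10)])  # penalty ramps 0.0 -> 1.0
```

The key point the sketch mirrors: because the latent is already informative, reconstruction is a single decoder call rather than a multi-step sampling loop, which is where the claimed 20× decoding speedup over prior diffusion codecs comes from.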
Pipeline

Performance
<img src="figs/Performance.png">
🔖 TODO
- [ ] Release testing and training code.
- [ ] Release pre-trained models.
- [ ] Provide WebUI.
- [ ] Provide HuggingFace demo.
🔗 Contents
- Datasets
- Models
- Training
- Testing
- Results
- Acknowledgements
<a name="results"></a>🔎 Results
We achieve impressive performance on image compression tasks.
<details open> <summary>Quantitative Results (click to expand)</summary>

- Results in Fig. 4 of the main paper
- Results in Fig. 5 of the main paper
- Rate-Distortion-Perception Results (Fig. 4 of the supplementary material)
- Visual Comparison Results (Fig. 5 of the supplementary material)
- Extended Qualitative Results (Fig. 6 of the supplementary material)
- Additional Results on DIV2K-val (Fig. 7 of the supplementary material)
- Additional Results on Kodak (Fig. 7 of the supplementary material)

</details>
<a name="citation"></a>📎 Citation
If you find the code helpful in your research or work, please cite the following paper.
@inproceedings{chen2026steering,
  title={Steering One-Step Diffusion Model with Fidelity-Rich Decoder for Fast Image Compression},
  author={Chen, Zheng and Zhou, Mingde and Guo, Jinpei and Yuan, Jiale and Ji, Yifei and Zhang, Yulun},
  booktitle={AAAI},
  year={2026}
}
<a name="acknowledgements"></a>💡 Acknowledgements