TADSR
This is the official PyTorch code for the paper "Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution".
🚩 Accepted by CVPR 2026
<a href='https://arxiv.org/abs/2508.16557'><img src='https://img.shields.io/badge/Paper-arxiv-b31b1b.svg'></a> <a href='https://zty557.github.io/TADSR_HomePage/'><img src='https://img.shields.io/badge/Project page-TADSR-1bb41b.svg'></a> <a href=''><img src='https://img.shields.io/badge/Space-huggingface-ffd700.svg'></a>
Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution<br> Tianyi Zhang<sup>1</sup>, Zheng-Peng Duan<sup>1</sup>, Peng-Tao Jiang<sup>2</sup>, Bo Li<sup>2</sup>, Ming-Ming Cheng<sup>1</sup>, Chun-Le Guo<sup>1,3,†</sup>, Chongyi Li<sup>1,3</sup> <br> <sup>1</sup> VCIP, CS, Nankai University, <sup>2</sup> vivo Mobile Communication Co., Ltd., <sup>3</sup> NKIARI, Shenzhen Futian<br> <sup>†</sup>Corresponding author.

:star: If TADSR is helpful to your images or projects, please help star this repo. Thank you! :point_left:
:boom: News
- 2025.08.25: Created this repo.
:runner: TODO
- [x] Release training and inference code
- [x] Release Checkpoints
:wrench: Dependencies and Installation
- Clone repo

```shell
git clone https://github.com/zty557/TADSR.git
cd TADSR
```
- Install packages

```shell
conda create -n tadsr python==3.10 -y
conda activate tadsr
pip install -r requirements.txt
```
:surfer: Quick Inference
Step 1: Download Checkpoints
Download the [TADSR] checkpoints and place them in the `preset/weights` directory.
Step 2: Prepare testing data
Place low-quality images in `preset/datasets/test_datasets/`.
You can download the RealSR, DRealSR, and RealLR200 test sets from [SeeSR].
Thanks for their awesome work.
Step 3: Running testing command
```shell
bash scripts/test_tadsr.sh
```
Replace `[image_path]` and `[output_dir]` in the script with your input and output paths before running the command.
Step 4: Check the results
The processed results will be saved in the `[output_dir]` directory.
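As a quick sanity check, you can count the results in `[output_dir]` against the inputs in `[image_path]`. The snippet below is a minimal sketch, not part of the official codebase; it assumes (hypothetically) that the test script keeps each input's file stem for its output.

```python
from pathlib import Path

def check_results(input_dir: str, output_dir: str, exts=(".png", ".jpg", ".jpeg")):
    """Report how many low-quality inputs have a matching result in output_dir.

    Returns (total_inputs, num_processed, missing_filenames). Assumes the
    test script reuses the input file stem for the output image.
    """
    inputs = [p for p in Path(input_dir).iterdir() if p.suffix.lower() in exts]
    out_stems = {p.stem for p in Path(output_dir).iterdir() if p.suffix.lower() in exts}
    missing = sorted(p.name for p in inputs if p.stem not in out_stems)
    return len(inputs), len(inputs) - len(missing), missing
```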
:muscle: Train
Step 1: Prepare the training data
- Download the training dataset: LSDIR.
- Following [SeeSR], you can generate the LR-HR pairs for training.
- Run `bash data/get_tag.sh` to get the paths of each HR-LR pair and their corresponding prompts; this produces a `dataset_list.txt` file in the following format:
```
LSDIR/HR_image/0000001.png LSDIR/LR_image/0000001.png "tag prompt of 0000001.png"
LSDIR/HR_image/0000002.png LSDIR/LR_image/0000002.png "tag prompt of 0000002.png"
LSDIR/HR_image/0000003.png LSDIR/LR_image/0000003.png "tag prompt of 0000003.png"
...
```
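Each line therefore holds an HR path, an LR path, and a quoted tag prompt. A minimal parser for this format (a sketch based only on the layout shown above, not part of the official codebase) could look like:

```python
import shlex

def parse_dataset_list(path: str):
    """Parse a dataset_list.txt file into (hr_path, lr_path, prompt) triples.

    Each non-empty line looks like:
        LSDIR/HR_image/0000001.png LSDIR/LR_image/0000001.png "tag prompt"
    shlex.split handles the quoted prompt, which may contain spaces.
    """
    triples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            hr, lr, prompt = shlex.split(line)
            triples.append((hr, lr, prompt))
    return triples
```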
Step 2: Start training
Use the following command to start the training process:
```shell
bash scripts/train_tadsr.sh
```
Replace `[txt_path]` with the path to the `dataset_list.txt` file generated from your dataset.
📜 License
This project is licensed under the Pi-Lab License 1.0 - see the LICENSE file for details.
:book: Citation
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@misc{zhang2025timeawarestepdiffusionnetwork,
      title={Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution},
      author={Tianyi Zhang and Zheng-Peng Duan and Peng-Tao Jiang and Bo Li and Ming-Ming Cheng and Chun-Le Guo and Chongyi Li},
      year={2025},
      eprint={2508.16557},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2508.16557},
}
```
:postbox: Contact
For technical questions, please contact zty557@gmail.com.
