
Retinexformer

"Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement" (ICCV 2023) & (NTIRE 2024 Runner-Up)


<div align="center"> <p align="center"> <img src="figure/logo.png" width="200px"> </p>


</div>

Introduction

This repo is a baseline and toolbox for a wide range of low-light image enhancement tasks. It supports over 15 benchmarks and extremely high-resolution (up to 4000x6000) low-light enhancement. Our method Retinexformer won second place in the NTIRE 2024 Challenge on Low Light Enhancement. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.

Awards

<img src="./figure/ntire.png" height=240> <img src="./figure/NTIRE_2024_award.png" height=240>

News

  • 2025.02.10 : NTIRE 2025 Low-light Image Enhancement Challenge has started. Welcome to use our Retinexformer and MST to participate in this challenge. 😄
  • 2024.09.15 : An enhanced version of Retinexformer (ECCV 2024) has been released at this repo. Feel free to check and use it. 🤗
  • 2024.08.07 : We share the code that can draw our teaser figure (the bar comparison) here. Feel free to use it in your research :smile:
  • 2024.07.03 : We share more results of compared baseline methods to help your research. Feel free to download them from Google Drive or Baidu Disk :smile:
  • 2024.07.01 : An enhanced version of Retinexformer has been accepted by ECCV 2024. Code will be released. Stay tuned. 🚀
  • 2024.05.12 : RetinexMamba based on our Retinexformer framework and this repo has been released. The first Mamba work on low-light enhancement. Thanks to the efforts of the authors.
  • 2024.03.22 : We release distributed data parallel (DDP) and mixed-precision training strategies to help you train larger models, a self-ensemble testing strategy to help you derive better results, and an adaptive split-and-test strategy for high-resolution (up to 4000x6000) low-light image enhancement. Feel free to use them. 🚀
  • 2024.03.21 : Our methods Retinexformer and MST++ (NTIRE 2022 Spectral Reconstruction Challenge Winner) ranked top-2 in the NTIRE 2024 Challenge on Low Light Enhancement. Code, pre-trained weights, training logs, and enhancement results have been released in this repo. Feel free to use them! 🚀
  • 2024.02.15 : NTIRE 2024 Challenge on Low Light Enhancement begins. Welcome to use our Retinexformer or MST++ (NTIRE 2022 Spectral Reconstruction Challenge Winner) to participate in this challenge! :trophy:
  • 2023.11.03 : The test setting of KinD, LLFlow, and recent diffusion models, and the corresponding results on LOL, are provided. Please note that we do not recommend this test setting because it uses the mean of the ground truth to obtain better results. However, if you want to follow KinD, LLFlow, and recent diffusion-based works for fair comparison, you may use it. Please refer to the Testing part for details.
  • 2023.11.02 : Retinexformer is added to the Awesome-Transformer-Attention collection. 💫
  • 2023.10.20 : Params and FLOPS evaluating function is provided. Feel free to check and use it.
  • 2023.10.12 : Retinexformer is added to the ICCV-2023-paper collection. 🚀
  • 2023.10.10 : Retinexformer is added to the low-level-vision-paper-record collection. ⭐
  • 2023.10.06 : Retinexformer is added to the awesome-low-light-image-enhancement collection. :tada:
  • 2023.09.20 : Some results on ExDark nighttime object detection are released.
  • 2023.09.20 : Code, models, results, and training logs have been released. Feel free to use them. ⭐
  • 2023.07.14 : Our paper has been accepted by ICCV 2023. Code and Models will be released. :rocket:
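The geometric self-ensemble testing strategy mentioned in the 2024.03.22 news item averages the model's predictions over flipped and rotated copies of the input. A minimal NumPy sketch of the idea (this is an illustration, not the repo's implementation; `model` stands for any HxWxC-to-HxWxC enhancement network):

```python
import numpy as np

def self_ensemble(model, img):
    """Geometric self-ensemble (x8): run `model` on the 8 flip/rotation
    variants of `img`, undo each transform on the output, and average."""
    outs = []
    for k in range(4):                      # 0/90/180/270-degree rotations
        for flip in (False, True):
            t = np.rot90(img, k)
            if flip:
                t = t[:, ::-1]
            y = model(t)
            if flip:                        # undo the transforms in reverse order
                y = y[:, ::-1]
            outs.append(np.rot90(y, -k))
    return np.mean(outs, axis=0)
```

For a transform-equivariant model the averaging is a no-op; for a real network it smooths out orientation-dependent errors at 8x the inference cost.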

Results

  • Results on LOL-v1, LOL-v2-real, LOL-v2-synthetic, SID, SMID, SDSD-in, SDSD-out, and MIT Adobe FiveK datasets can be downloaded from Baidu Disk (code: cyh2) or Google Drive

  • Results on LOL-v1, LOL-v2-real, and LOL-v2-synthetic datasets with the same test setting as KinD, LLFlow, and recent diffusion models can be downloaded from Baidu Disk (code: cyh2) or Google Drive.

  • Results on the NTIRE 2024 low-light enhancement dataset can be downloaded from Baidu Disk (code: cyh2) or Google Drive

  • Results on LIME, NPE, MEF, DICM, and VV datasets can be downloaded from Baidu Disk (code: cyh2) or Google Drive

  • Results on ExDark nighttime object detection can be downloaded from Baidu Disk (code: cyh2) or Google Drive. Please use this repo to run experiments on the ExDark dataset

  • Results of some compared baseline methods are shared on Google Drive and Baidu Disk

<details close> <summary><b>Performance on LOL-v1, LOL-v2-real, LOL-v2-synthetic, SID, SMID, SDSD-in, and SDSD-out:</b></summary>

results1

</details> <details close> <summary><b>Performance on LOL with the same test setting as KinD, LLFlow, and diffusion models:</b></summary>

| Metric | LOL-v1 | LOL-v2-real | LOL-v2-synthetic |
| :----: | :----: | :---------: | :--------------: |
| PSNR   | 27.18  | 27.71       | 29.04            |
| SSIM   | 0.850  | 0.856       | 0.939            |

Please note that we do not recommend this test setting because it uses the mean of the ground truth to obtain better results. However, if you want to follow KinD, LLFlow, and recent diffusion-based works, you may use it. Please refer to the Testing part for details.
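Concretely, the GT-mean setting rescales each enhanced image so that its mean intensity matches the ground truth's before metrics are computed, which inflates scores for predictions that are only globally under- or over-exposed. A minimal NumPy sketch (function names are ours, not the repo's):

```python
import numpy as np

def gt_mean_adjust(pred, gt):
    """Scale `pred` so its mean intensity matches the ground truth's
    (the GT-mean test setting used in KinD/LLFlow-style evaluation)."""
    scale = gt.mean() / max(pred.mean(), 1e-8)
    return np.clip(pred * scale, 0.0, 1.0)

def psnr(a, b):
    """PSNR for images with values in [0, 1]."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)

# Toy example: a globally under-exposed prediction benefits from the
# adjustment, since its main error is a brightness offset.
rng = np.random.default_rng(0)
gt = rng.uniform(0.3, 0.9, size=(32, 32, 3))
pred = np.clip(0.5 * gt + rng.normal(0.0, 0.01, gt.shape), 0.0, 1.0)
adjusted = gt_mean_adjust(pred, gt)
```

Here `psnr(adjusted, gt)` exceeds `psnr(pred, gt)` even though no new information was added, which is why direct comparison against methods evaluated without this adjustment is unfair.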

</details> <details close> <summary><b>Performance on NTIRE 2024 test-challenge:</b></summary>

| Method | Retinexformer | MST++ | Ensemble |
| :----: | :-----------: | :---: | :------: |
| PSNR   | 24.61         | 24.59 | 25.30    |
| SSIM   | 0.85          | 0.85  | 0.85     |

Feel free to check the Codalab leaderboard. Our method ranks second.

results_ntire

</details> <details close> <summary><b>Performance on MIT Adobe FiveK:</b></summary>

results2

</details> <details close> <summary><b>Performance on LIME, NPE, MEF, DICM, and VV:</b></summary>

results3

</details> <details close> <summary><b>Performance on ExDark Nighttime object detection:</b></summary>

results4

</details>

Gallery

| NTIRE - dev - 2000x3000 | NTIRE - challenge - 4000x6000 |
| :---------------------: | :---------------------------: |
| <img src="/figure/ntire_dev.png" height="250px"/> | <img src="/figure/ntire_challenge.png" height="250px"/> |
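At such resolutions, a split-and-test strategy keeps memory bounded by running the network on overlapping tiles and stitching the outputs, averaging where tiles overlap. A simplified NumPy sketch (the repo's adaptive tile sizing is omitted; names and defaults here are illustrative):

```python
import numpy as np

def tile_starts(size, tile, step):
    """Start offsets so that tiles of width `tile` cover [0, size)."""
    last = max(size - tile, 0)
    starts = list(range(0, last + 1, step))
    if starts[-1] != last:
        starts.append(last)                 # flush tile at the border
    return starts

def split_and_test(model, img, tile=512, overlap=32):
    """Run `model` on overlapping crops of `img` (HxWxC) and
    average the predictions in the overlapping regions."""
    H, W, _ = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    weight = np.zeros((H, W, 1))
    step = tile - overlap
    for y in tile_starts(H, tile, step):
        for x in tile_starts(W, tile, step):
            patch = img[y:y + tile, x:x + tile]
            out[y:y + patch.shape[0], x:x + patch.shape[1]] += model(patch)
            weight[y:y + patch.shape[0], x:x + patch.shape[1]] += 1
    return out / weight
```

The overlap hides seams between tiles; larger overlaps give smoother stitching at the cost of more redundant computation.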

 

1. Create Environment

We suggest using PyTorch 1.11 to reproduce the results in our ICCV 2023 paper and PyTorch 2 to reproduce the results in the NTIRE 2024 Challenge, because PyTorch 2 saves more memory in mixed-precision training.

1.1 Install the environment with Pytorch 1.11

  • Make Conda Environment

```shell
conda create -n Retinexformer python=3.7
conda activate Retinexformer
```

  • Install Dependencies

```shell
conda install pytorch=1.11 torchvision cudatoolkit=11.3 -c pytorch
pip i
```
