LED

[ICCV 2023] Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising && [arXiv 2023] Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model && [NeurIPS 2025] UltraLED: Learning to See Everything in Ultra-High Dynamic Range Scenes

<!-- # <div align="center"> Let's Prepare for <a href="https://mipi-challenge.org/MIPI2024/">MIPI@2024</a>! \[<a href="/tools/mipi_starting_kit">Starting-Kit</a>\] </div> --> <p align="center"> <img src='docs/led_ultra.png' alt='ICCV23_LED_LOGO' width='400px'/><br/> </p>

<div align="center"><a href="https://srameo.github.io/projects/led-extension/">Homepage</a> | <a href="https://arxiv.org/abs/2308.03448v2">Paper</a> | <a href="https://drive.google.com/drive/folders/11MYkjzbPIZ7mJbu9vrgaVC-OwGcOFKsM?usp=sharing">Google Drive</a> | <a href="https://pan.baidu.com/s/17rA_8GvfNPZJY5Zl9dyILw?pwd=iay5">Baidu Cloud</a> | <a href="https://zhuanlan.zhihu.com/p/648242095">知乎</a> | <a href="https://github.com/Srameo/LED/tree/main/tools/mipi_starting_kit">MIPI Starting-Kit</a>

<!-- <a href="https://github.com/Srameo/LED/files/12733867/iccv23_poster.pdf">Poster</a> | <a href="https://srameo.github.io/projects/led-iccv23/assets/slides/iccv23_slides_en.pdf">Slides</a> | <a href="https://youtu.be/Jo8OTAnUYkU">Video</a> </div> --> <div align="center">

:newspaper:News | :wrench:Install | :phone:README for UltraLED | :sparkles:Models Zoo | :camera:Quick Demo | :robot:Benchmark | :construction:Contribute | :scroll:License | :question:FAQ

</div> <!-- # :bulb: LED: Lighting Every Darkness in Two Pairs! -->

This repository contains the official implementation of the following papers:

UltraLED: Learning to See Everything in Ultra-High Dynamic Range Scenes<br/> Yuang Meng<sup>*</sup>, Xin Jin<sup>*</sup>, Lina Lei, Chunle Guo<sup>#</sup>, Chongyi Li<br/> (* denotes equal contribution. # denotes the corresponding author.)<br/> In NeurIPS 2025, [Paper Link]

Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising<br/> Xin Jin<sup>*</sup>, Jia-Wen Xiao<sup>*</sup>, Ling-Hao Han, Chunle Guo<sup>#</sup>, Ruixun Zhang, Xialei Liu, Chongyi Li<br/> (* denotes equal contribution. # denotes the corresponding author.)<br/> In ICCV 2023, [Paper Link], [Poster], [Slides], [Video]

Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model<br/> Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo<sup>#</sup>, Xialei Liu, Chongyi Li, Ming-Ming Cheng<sup>#</sup><br/> (# denotes corresponding authors.)<br/> arXiv preprint, [Paper Link]

<details> <summary>Comparison with Calibration-Based Methods</summary>

A brief introduction to the calibration process is available in [<a href='https://github.com/Srameo/LED/blob/main/docs/calib_en.md'>EN</a>/<a href='https://github.com/Srameo/LED/blob/main/docs/calib_cn.md'>CN</a>].

<img src='https://github.com/Srameo/LED/assets/51229295/022505b0-8ff0-445b-ab1f-bb79b48ecdbd' alt='ICCV23_LED_TEASER0' width='500px'/> </details>

LED is a Calibration-Free (or called implicit calibration) Pipeline for RAW Denoising (currently for extremely low-light conditions).
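Denoisers like LED operate on Bayer RAW data, which is typically black-level-subtracted, normalized, and packed from a single RGGB mosaic into four half-resolution channels before entering the network. A minimal NumPy sketch of that common preprocessing step (independent of this repository; the function name and the levels below are illustrative assumptions, not the repo's API):

```python
import numpy as np

def pack_rggb(raw: np.ndarray, black_level: float, white_level: float) -> np.ndarray:
    """Normalize a single-channel RGGB Bayer frame to [0, 1] and pack it
    into 4 half-resolution channels (R, G1, G2, B).
    NOTE: function name and levels are illustrative, not from the LED repo."""
    norm = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    norm = np.clip(norm, 0.0, 1.0)
    h, w = norm.shape
    return np.stack([
        norm[0:h:2, 0:w:2],  # R  (even rows, even cols)
        norm[0:h:2, 1:w:2],  # G1 (even rows, odd cols)
        norm[1:h:2, 0:w:2],  # G2 (odd rows, even cols)
        norm[1:h:2, 1:w:2],  # B  (odd rows, odd cols)
    ], axis=0)

# Example on a synthetic 4x4 Bayer frame with hypothetical 14-bit levels:
bayer = np.arange(16, dtype=np.uint16).reshape(4, 4) * 1000
packed = pack_rggb(bayer, black_level=512, white_level=16383)
print(packed.shape)  # (4, 2, 2): channels x half-height x half-width
```

In practice the black/white levels come from the RAW file metadata (e.g. via rawpy), and the packed tensor is what gets fed to the denoiser.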

So tired of calibrating the noise model? Try our LED!<br/> Achieving <b style='font-size: large'>SOTA performance</b> with only <b style='font-size: large'>2 data pairs</b> and <b style='font-size: large'>less than 4 minutes of training</b>!

<table> <tbody> <tr><td><img src='https://github.com/Srameo/LED/assets/51229295/5311798d-f988-48f7-b50e-7cd080d7316c' alt='ICCV23_LED_TEASER1'/> </td><td><img src='https://github.com/Srameo/LED/assets/51229295/3403a346-cd54-435c-b0b3-46b716863719' alt='ICCV23_LED_TEASER2'/></td></tr> <tr><td><details><summary>More Teaser</summary><img src='https://github.com/Srameo/LED/assets/51229295/0c737715-919a-49a9-a115-76935b74a5bb' alt='ICCV23_LED_TEASER3'/></details></td> <td><details><summary>More Teaser</summary><img src='https://github.com/Srameo/LED/assets/51229295/c3af68de-9e6d-47c9-8365-743be671ad77' alt='ICCV23_LED_TEASER4'/></details></td></tr> </tbody> </table>

:newspaper: News

Future work can be found in todo.md.

<ul> <li><b>Dec 18, 2025</b>: Release the training and testing code, along with the corresponding data, for UltraLED.</li> <li><b>Jan 13, 2024</b>: Release the <a href="/tools/mipi_starting_kit">starting-kit</a> for <a href="https://mipi-challenge.org/MIPI2024/">MIPI@2024</a>. Additionally, we release the pre-trained parameters of Restormer and NAFNet.</li> <li><b>Dec 27, 2023</b>: Update an extended version of our ICCV 23 paper (<a href="https://srameo.github.io/projects/led-extension/">Project Page</a>/<a href="https://arxiv.org/abs/2308.03448v2">Paper</a>).</li> <li><b>Dec 1-5, 2023</b>: Add the related code/docs [<a href='https://github.com/Srameo/LED/blob/main/docs/calib_en.md'>EN</a>/<a href='https://github.com/Srameo/LED/blob/main/docs/calib_cn.md'>CN</a>] from <a href="https://github.com/Srameo/LED/pull/14">PR#14</a>/<a href="https://github.com/Srameo/LED/pull/16">PR#16</a>, thanks to @<a href="https://github.com/HYX20011209">HYX20011209</a>.</li> <li><b>Sep 27, 2023</b>: Add the links to our <a href="https://github.com/Srameo/LED/files/12733867/iccv23_poster.pdf">Poster</a>, <a href="https://srameo.github.io/projects/led-iccv23/assets/slides/iccv23_slides_en.pdf">Slides</a>, and <a href="https://youtu.be/Jo8OTAnUYkU">Video</a>.</li> <li><b>Aug 19, 2023</b>: Release the relevant files on <a href="https://pan.baidu.com/s/17rA_8GvfNPZJY5Zl9dyILw?pwd=iay5">Baidu Cloud</a> (pwd: iay5).</li> </ul> <details> <summary>History</summary> <ul> <li><b>Aug 15, 2023</b>: For faster benchmarking, we released the relevant files in commit <a href="https://github.com/Srameo/LED/commit/fadffc7282b02ab2fcc7fbade65f87217b642588"><code>fadffc7</code></a>.</li> <li><b>Aug, 2023</b>: We released a Chinese explanation of our paper on <a href="https://zhuanlan.zhihu.com/p/648242095">Zhihu (知乎)</a>.</li> <li><b>Aug, 2023</b>: Our code is publicly available!</li> <li><b>July, 2023</b>: Our paper "Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising" was accepted by ICCV 2023.</li> </ul> </details>

:wrench: Dependencies and Installation

  1. Clone and enter the repo:
    git clone https://github.com/Srameo/LED.git ICCV23-LED
    cd ICCV23-LED
    
  2. Simply run install.sh for installation, or refer to install.md for more details.

    We use the customized rawpy package from ELD. If you don't want to use it, or want more information, please refer to install.md.

    bash install.sh
    
  3. Activate your env and start testing!
    conda activate LED-ICCV23
    

:phone: README for UltraLED

The code for training and evaluating UltraLED, along with the corresponding data, is available in UltraLED.md.

:sparkles: Pretrained Models

If you need models for academic research and would like to benchmark our method, please refer to pretrained-models.md, which offers a wide variety of models covering diverse methods, training strategies, and both pre-trained and fine-tuned variants.

We are currently dedicated to training an exceptionally capable network that generalizes well to various scenarios using <strong>only two data pairs</strong>! We will update this section once we achieve that goal. Stay tuned!<br/> Alternatively, you can use the following pretrained LED modules for customizing on your own cameras (please follow the instructions in Quick Demo).

<table> <thead> <tr> <th> Method </th> <th> Noise Model </th> <th> Phase </th> <th> Framework </th> <th> Training Strategy </th> <th> Additional Dgain (ratio) </th> <th> Camera Model </th> <th> Validation on </th> <th> :link: Download Links </th> <th> Config File </th> </tr> </thead> <tbody> <tr> <td>LED</td> <th> ELD (5 Virtual Cameras) </th> <th> Pretrain </th> <th> UNet </th> <th> PMN </th> <th> 100-300 </th> <th> - </th> <th> - </th> <th> [<a href="https://drive.google.com/file/d/1FSXp_vJxbo8_dbMJPiA33DZfagn1ExHA/view?usp=drive_link">Google Drive</a>] </th> <th> [<a href="/options/LED/pretrain/MM22_PMN_Setting.yaml">options/LED/pretrain/MM22_PMN_Setti