SICE
Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images (TIP 2018)
Abstract
Due to poor lighting conditions and the limited dynamic range of digital imaging devices, recorded images are often under-/over-exposed and have low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. These methods, however, often fail to reveal image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this work, we propose to use a convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training dataset of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image dataset, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate the contrast-enhanced images for each sequence, and subjective experiments are conducted to screen the best-quality one as the reference image of each scene. With the constructed dataset, a CNN can easily be trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.
Code for training and testing
- Trained Caffe model for under-exposed images: *.caffemodel
- Network structure: *.prototxt (to view the network structure, use this link)
- Install and compile Caffe (the MATLAB interface is used)
Model 1 (End-to-end residual learning)
Run the Demo_Test.m for the result
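
The idea behind end-to-end residual learning is that the network predicts only the difference between the enhanced image and its input, which is then added back. A minimal NumPy sketch of this composition (the `predict_residual` callable stands in for the trained CNN's forward pass and is purely illustrative, not the released model):

```python
import numpy as np

def enhance_residual(low_contrast, predict_residual):
    """Residual learning: enhanced = input + CNN(input).
    `predict_residual` is a placeholder for the trained network."""
    residual = predict_residual(low_contrast)
    # Clip back to the valid intensity range after adding the residual.
    return np.clip(low_contrast + residual, 0.0, 1.0)
```

With a zero residual the input passes through unchanged, which is what makes the residual parameterization easy to train: the network only has to learn the correction.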

Model 2 (Two-stage network)
Run the Demo_Test.m for the result

Model 3 (Two-stage per-pixel convolution)
Run the Demo_Test.m for the result

Dataset
Please refer to:
- Google Drive: Part1: 360 Image Sequences, Part2: 229 Image Sequences
or
- BaiduYun: Part1: 360 Image Sequences, Part2: 229 Image Sequences Data, Part2: 229 Image Sequences Label
Requirements and Dependencies
Caffe
New Layers With CPU and GPU Implementations
caffe.proto (Parameters for SSIM and Regularization Layer)
Usage
```
layer {
  name: "SSIMLossLayer"
  type: "SSIMLoss"
  bottom: "output"
  bottom: "label"
  top: "SSIMLoss"
  ssim_loss_param {
    kernel_size: 8
    stride: 8
    c1: 0.0001
    c2: 0.001
  }
}
```
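
To make the `ssim_loss_param` fields concrete, here is a NumPy sketch of a windowed (1 − SSIM) loss using the same `kernel_size`, `stride`, `c1`, and `c2` settings. This is an illustration of what such a layer computes, not the Caffe C++/CUDA implementation itself, which may differ in details such as window weighting:

```python
import numpy as np

def ssim_loss(output, label, kernel_size=8, stride=8, c1=1e-4, c2=1e-3):
    """Mean (1 - SSIM) over strided windows of two single-channel images."""
    h, w = output.shape
    losses = []
    for i in range(0, h - kernel_size + 1, stride):
        for j in range(0, w - kernel_size + 1, stride):
            x = output[i:i + kernel_size, j:j + kernel_size]
            y = label[i:i + kernel_size, j:j + kernel_size]
            mx, my = x.mean(), y.mean()          # local means
            vx, vy = x.var(), y.var()            # local variances
            cov = ((x - mx) * (y - my)).mean()   # local covariance
            # c1, c2 stabilize the ratio when means/variances are near zero.
            ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
                   ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
            losses.append(1.0 - ssim)
    return float(np.mean(losses))
```

Identical images give a loss of zero, so minimizing this objective drives the network output toward the reference image's local structure.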
Citation
```
@article{Cai2018deep,
  title={Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images},
  author={Cai, Jianrui and Gu, Shuhang and Zhang, Lei},
  journal={IEEE Transactions on Image Processing},
  volume={27},
  number={4},
  pages={2049-2062},
  year={2018},
  publisher={IEEE}
}
```