10 repositories found
mahmoudnafifi / C5
Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)
howardyclo / CLCC CVPR21
An official TensorFlow implementation of "CLCC: Contrastive Learning for Color Constancy", accepted at CVPR 2021.
mahmoudnafifi / Semantic Color Constancy Using CNN
Semantic information can help CNNs to get better illuminant estimation -- a proof of concept
mahmoudnafifi / SIIE
Sensor-Independent Illumination Estimation for DNN Models (BMVC 2019)
swift-n-brutal / Illuminant Estimation
Deep Specialized Network for Illuminant Estimation
Shaobinggao / Multi Illuminant Based Color Constancy
Combining bottom-up and top-down visual mechanisms for color constancy under varying illumination. This repository contains the datasets and code published for color constancy under varying illuminations.

COPYRIGHT NOTICE: Copyright (c) 2019. All rights reserved. This document is a rough version summarizing the results and code in publication [1], and is available for research purposes only. We reserve the right to further correct and update the data.

This repository contains three datasets for color constancy under varying illuminations, used in publication [1]:
- Real-world multi-illuminant dataset: 37 images captured under various non-uniform light sources.
- Synthetic multi-illuminant dataset: 100 images with synthetic multiple illuminants.
- MCC-BU+TD: results of multiple MCC algorithms on several real-world images taken from the web, which can easily be used and compared in any research publication.

For more information, please refer to the readme.txt in each folder. If you use this dataset to evaluate your approach and produce results, please cite our work as follows:

[1] S. Gao, Y. Ren, M. Zhang, and Y. Li, "Combining bottom-up and top-down visual mechanisms for color constancy under varying illumination," IEEE Transactions on Image Processing, doi: 10.1109/TIP.2019.2908783.
[2] X.-S. Zhang, S.-B. Gao, R.-X. Li, X.-Y. Du, C.-Y. Li, and Y.-J. Li, "A retinal mechanism inspired color constancy model," IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1219–1232, 2016.
[3] K.-F. Yang, S.-B. Gao, Y.-J. Li, and Y. Li, "Efficient illuminant estimation for color constancy using grey pixels," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2254–2263.
[4] S.-B. Gao, K.-F. Yang, C.-Y. Li, and Y.-J. Li, "Color constancy using double-opponency," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 1973–1985, 2015.

Questions and comments are welcome at gaoshaobing@scu.edu.cn.
shuwei666 / Robust Pixel Wise Illuminant Estimation
Release code for the paper "Robust pixel-wise illuminant estimation algorithm for images with a low bit-depth" (Optics Express, 2024)
mahmoudnafifi / APAP Bias Correction For Illumination Estimation Methods
Bias correction method for illuminant estimation -- JOSA 2019
jackygsb / Efficient Illuminant Estimation For Color Constancy Using Grey Pixels
No description available
MarcoBauzz / Color Constancy Framework
A Convolutional Framework for Color Constancy [IEEE TNNLS 2024]