Authors:
Samu Koskinen, Erman Acar, Joni-Kristian Kämäräinen
Abstract
Color constancy remains one of the biggest challenges in camera color processing. Convolutional neural networks have improved the situation, but problems remain in many conditions, especially in scenes dominated by a single color. In this work, we approach the problem from a slightly different angle: what if we had access to information beyond the raw RGB image data, and what kind of information would bring significant improvements while still being feasible in a mobile device? These questions sparked a novel approach to computational color constancy. Instead of the raw RGB images used by existing algorithms to estimate scene white points, our approach is based on the scene’s average color spectrum, i.e., a single-pixel spectral measurement. We show that as few as 10–14 spectral channels are sufficient. Notably, the sensor output contains five orders of magnitude less data than the raw RGB images of a 10 MPix camera. The spectral sensor captures the “spectral fingerprints” of different light sources, and the illuminant white point can be accurately estimated by a standard regressor. The regressor can be trained on measurements generated from existing RGB color constancy datasets; for this purpose, we propose a spectral data generation pipeline that can be used whenever the dataset’s camera model is known and its spectral characterization can therefore be obtained. To verify the results with real data, we collected a spectral dataset with a commercial spectrometer. On all datasets, the proposed Single Pixel Spectral Color Constancy obtains the highest accuracy in both single-dataset and cross-dataset experiments. The method is particularly effective on difficult scenes, for which the average improvement is 40–70% over the state of the art. The approach also extends to the multi-illuminant case, for which the experiments provide promising results.
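The abstract describes the pipeline only at a high level, so the following is a minimal illustrative sketch of the idea rather than the authors' implementation: simulate coarse (about 12-channel) single-pixel spectral measurements, fit a standard regressor that maps them to the camera's RGB white point, and evaluate with the usual angular error. The Gaussian camera sensitivities, the random illuminant/reflectance model, and the choice of a random forest regressor are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of single-pixel spectral color constancy:
# a low-dimensional average scene spectrum -> standard regressor -> RGB white point.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
WAVELENGTHS = np.arange(400, 701, 5)          # nm, visible range
N_CHANNELS = 12                               # 10-14 channels per the abstract

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((WAVELENGTHS - mu) / sigma) ** 2)

# Hypothetical camera spectral sensitivities (Gaussian R, G, B bands).
CAM_RGB = np.stack([gaussian(610, 40), gaussian(540, 40), gaussian(465, 35)])

def random_illuminant():
    """Smooth random SPD standing in for daylight/tungsten/fluorescent-like sources."""
    return 0.2 + sum(rng.uniform(0.0, 1.0) * gaussian(rng.uniform(420, 680), rng.uniform(40, 120))
                     for _ in range(3))

def simulate_sample():
    """Return (binned average scene spectrum, ground-truth camera RGB white point)."""
    illum = random_illuminant()
    reflectance = np.clip(0.3 + 0.2 * rng.standard_normal(len(WAVELENGTHS)), 0.05, 1.0)
    scene_spectrum = illum * reflectance                 # average radiance reaching the sensor
    # "Single pixel" spectral sensor: integrate into N_CHANNELS coarse bands.
    bins = np.array_split(scene_spectrum, N_CHANNELS)
    measurement = np.array([b.mean() for b in bins])
    measurement /= measurement.sum()                     # exposure-invariant
    white_point = CAM_RGB @ illum                        # camera response to the illuminant
    return measurement, white_point / np.linalg.norm(white_point)

X, y = map(np.array, zip(*[simulate_sample() for _ in range(4000)]))
X_tr, X_te, y_tr, y_te = X[:3000], X[3000:], y[:3000], y[3000:]

reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = reg.predict(X_te)

# Recovery angular error in degrees, the standard color constancy metric.
cos = np.sum(pred * y_te, axis=1) / (np.linalg.norm(pred, axis=1) * np.linalg.norm(y_te, axis=1))
print("mean angular error (deg):", np.degrees(np.arccos(np.clip(cos, -1, 1))).mean())
```

In the paper the training measurements are generated from existing RGB color constancy datasets via a spectral data generation pipeline that relies on the camera's spectral characterization; the synthetic data above merely stands in for that step.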
Funder
Tampere University including Tampere University Hospital, Tampere University of Applied Sciences
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
1 article.
1. Nighttime color constancy using robust gray pixels;Journal of the Optical Society of America A;2024-02-20