Affiliation:
1. School of Information Engineering, Nanchang University, Nanchang 330031, China
2. School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
Abstract
Over the past decade, significant advances have been made in low-light image enhancement (LLIE) owing to the strength of deep learning in non-linear mapping, feature extraction, and representation. However, no single method consistently outperforms the others across diverse scenarios. This challenge stems primarily from the data bias inherent in deep learning-based approaches, caused by disparities in image statistical distributions between training and testing datasets. To address this problem, we propose an unsupervised weight map generation network that effectively integrates pre-enhanced images produced by carefully selected, complementary LLIE methods. Our goal is to improve overall enhancement performance by leveraging these pre-enhanced images, organizing the enhancement workflow into a two-stage paradigm. Specifically, in the preprocessing stage, we employ two distinct LLIE methods, Night and PairLIE, chosen for their complementary enhancement characteristics, to process the given input low-light image. The resulting outputs, termed pre-enhanced images, serve as the two target images for the subsequent image fusion stage. In the fusion stage, an unsupervised UNet architecture determines the optimal pixel-level weight maps for merging the pre-enhanced images. This process is guided by a specially formulated loss function built on a no-reference image quality algorithm, the naturalness image quality evaluator (NIQE). Finally, using a mixed weighting mechanism that combines the generated pixel-level local weights with image-level global empirical weights, the pre-enhanced images are fused to produce the final enhanced image.
Our experimental results demonstrate strong performance across a range of datasets, surpassing various state-of-the-art methods, including the two pre-enhancement methods involved in the comparison. This performance is attributed to the harmonious integration of diverse LLIE methods, which yields robust, high-quality enhancement outcomes across varied scenarios. Furthermore, our approach is scalable and adaptable, remaining compatible with future advances in enhancement technologies while maintaining superior performance in this rapidly evolving field.
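The mixed weighting fusion described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `fuse_pre_enhanced`, the balance parameter `alpha`, and the value of the global weight `g_global` are illustrative assumptions; in the actual method, the pixel-level weight map comes from the unsupervised UNet and the global weight is set empirically.

```python
import numpy as np

def fuse_pre_enhanced(img_a, img_b, w_local, g_global=0.5, alpha=0.7):
    """Fuse two pre-enhanced images with a mixed weighting mechanism.

    img_a, img_b : pre-enhanced images (H, W, C), e.g. Night and PairLIE outputs.
    w_local      : pixel-level weight map in [0, 1] with shape (H, W, 1),
                   here assumed to come from a weight-generation network.
    g_global     : image-level global empirical weight in [0, 1].
    alpha        : hypothetical hyperparameter balancing local vs. global weights.
    """
    # Combine the local weight map with the global scalar weight.
    w = alpha * w_local + (1.0 - alpha) * g_global
    # Convex combination of the two pre-enhanced images, per pixel.
    return w * img_a + (1.0 - w) * img_b
```

With `alpha = 1.0` and a weight map of all ones, the fusion simply returns the first pre-enhanced image; intermediate weights blend the two sources pixel by pixel.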
Funder
Natural Science Foundation of China