Real-Time Low-Light Imaging in Space Based on the Fusion of Spatial and Frequency Domains
Published: 2023-12-15
Volume: 12
Issue: 24
Page: 5022
ISSN: 2079-9292
Container-title: Electronics
Short-container-title: Electronics
Language: en
Author:
Wu Jiaxin 1,2,3; Zhang Haifeng 1,3; Li Biao 1,3; Duan Jiaxin 1,3,4; Li Qianxi 1,2,3; He Zeyu 1,2,3; Cao Jianzhong 1,3; Wang Hao 1,3
Affiliation:
1. Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. Xi’an Key Laboratory of Spacecraft Optical Imaging and Measurement Technology, Xi’an 710119, China
4. School of Opto-Electronical Engineering, Xi’an Technological University, Xi’an 710021, China
Abstract
Due to the low photon count in space imaging and the performance bottlenecks of edge computing devices, a practical low-light imaging solution is needed that maintains satisfactory recovery quality while offering lower network latency, reduced memory usage, fewer model parameters, and fewer operations. We therefore propose a real-time deep learning framework for low-light imaging. Leveraging the parallel processing capabilities of the hardware, we process the raw sensor image data in parallel across branches of different dimensionalities. The high-dimensional branch performs high-dimensional feature learning in the spatial domain, while the mid-dimensional and low-dimensional branches perform pixel-level and global feature learning through the fusion of the spatial and frequency domains. This design keeps the network model lightweight while significantly improving both the quality and the speed of image recovery. To adaptively adjust the image according to its brightness and avoid losing detailed pixel-level feature information, we introduce an adaptive balancing module, which greatly enhances the effectiveness of the model. Finally, through validation on the SID dataset and our own low-light satellite dataset, we demonstrate that this method significantly improves image recovery speed while preserving image recovery quality.
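The abstract's core idea, combining a spatial-domain branch for local detail with a frequency-domain branch for global illumination, then balancing them adaptively by brightness, can be illustrated with a toy sketch. This is not the authors' implementation; all function names, the box filter, the low-frequency gain, and the brightness-based weight are simplified assumptions standing in for the paper's learned network branches and adaptive balancing module.

```python
import numpy as np

def spatial_branch(img):
    """Local detail: a 3x3 box filter as a stand-in for spatial-domain convs."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def frequency_branch(img, gain=4.0):
    """Global illumination: amplify the low-frequency part of the spectrum."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = 1.0 + (gain - 1.0) * (dist < min(h, w) / 8)  # boost low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

def adaptive_fuse(img):
    """Darker inputs lean on the frequency branch; brighter ones keep detail."""
    w = np.clip(1.0 - img.mean(), 0.0, 1.0)  # brightness-based weight
    return np.clip((1 - w) * spatial_branch(img) + w * frequency_branch(img),
                   0.0, 1.0)

dark = np.full((32, 32), 0.05)        # a uniformly under-exposed frame in [0, 1]
restored = adaptive_fuse(dark)
print(restored.mean() > dark.mean())  # the fused output is brighter
```

In the paper, the learned mid- and low-dimensional branches replace the hand-set frequency mask, and the adaptive balancing module replaces the fixed brightness weight; the sketch only shows why fusing the two domains can brighten globally without discarding spatial detail.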
Funder
Shaanxi provincial fund
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering