Affiliation:
1. College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650031, China
2. Yunnan Key Laboratory of Computer Science, Kunming University of Science and Technology, Kunming 650500, China
Abstract
Image fusion is a pivotal image-processing technology that merges multiple images from different sensors or imaging modalities into a single composite image. The process enhances and extracts the information contained across the source images, yielding a final image that is more informative and of higher quality. This paper introduces a novel method for infrared and visible image fusion that uses nested connections and frequency-domain decomposition to mitigate the loss of image detail features. By incorporating depthwise separable convolutions, the method reduces computational complexity and model size, thereby improving computational efficiency. A multi-scale residual fusion network, R2FN (Res2Net Fusion Network), is designed to replace traditional hand-crafted fusion strategies, enabling the network to better preserve detail information while improving the quality of the fused image. In addition, a new loss function is proposed to enhance salient feature information while preserving fine details. Experimental results on public datasets show that the method retains the detail information of visible-light images and highlights the salient features of infrared images while keeping the number of parameters minimal.
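The efficiency claim rests on the well-known parameter savings of depthwise separable convolutions. As a minimal sketch (illustrative arithmetic only, not the paper's implementation; bias terms omitted), the parameter counts compare as follows:

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k


def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one k x k kernel per input channel,
    # followed by a 1 x 1 pointwise convolution that mixes channels.
    return c_in * k * k + c_in * c_out


if __name__ == "__main__":
    c_in, c_out, k = 64, 64, 3  # example layer sizes (assumed, not from the paper)
    std = standard_conv_params(c_in, c_out, k)        # 36864
    sep = depthwise_separable_params(c_in, c_out, k)  # 4672
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 64-to-64-channel 3x3 layer, the separable variant uses roughly an eighth of the parameters, which is the kind of reduction that lets the fusion network stay small.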
Funder
Key Projects of Basic Research Program in Yunnan Province
National Natural Science Foundation of China
Development Fund of Key Laboratory of Computer Technology Application in Yunnan Province