A Novel Saliency-Based Decomposition Strategy for Infrared and Visible Image Fusion
Published: 2023-05-18
Volume: 15
Issue: 10
Page: 2624
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Author:
Qi Biao 1, Bai Xiaotian 2, Wu Wei 2, Zhang Yu 1, Lv Hengyi 1, Li Guoning 1
Affiliation:
1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
Abstract
The image decomposition strategy that extracts salient features from the source images is crucial for image fusion. To this end, we propose a novel saliency-based decomposition strategy for infrared and visible image fusion, in which latent low-rank representation (LatLRR) and the rolling guidance filter (RGF) are jointly employed to process the source images; we call this method DLatLRR_RGF. In this method, the source images are first decomposed into salient components and base components by LatLRR, and the salient components are then filtered by RGF. The final base components are obtained as the difference between each source image and its processed salient components. The salient components are fused by a rule based on the nuclear norm and a modified spatial frequency, while the base components are fused by an l2-energy minimization model. Finally, the fused image is reconstructed from the fused base components and the fused salient components. Multiple groups of experiments on different pairs of infrared and visible images demonstrate that, compared with other state-of-the-art fusion algorithms, the proposed method achieves superior fusion performance from both subjective and objective perspectives.
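The abstract outlines a five-step pipeline: LatLRR decomposition, RGF smoothing of the salient parts, recomputation of the base parts, per-component fusion, and recombination. The Python sketch below illustrates that flow under stated assumptions; it is a minimal illustration, not the authors' implementation. The pre-learned LatLRR projection matrix `L_proj`, the OpenCV-based RGF, the patch size, the equal base weights, and the use of the standard (rather than the paper's modified) spatial frequency are all assumptions not taken from the paper.

```python
# Minimal sketch of the DLatLRR_RGF pipeline described in the abstract.
# Assumptions: float32 grayscale images in [0, 1] with dimensions divisible
# by the patch size; L_proj is a pre-learned LatLRR projection (obtained
# offline from min ||Z||_* + ||L||_* + lam*||E||_1 s.t. X = XZ + LX + E).
import numpy as np
import cv2  # pip install opencv-contrib-python (needed for cv2.ximgproc)

def rolling_guidance_filter(img, iters=4, d=9, sigma_color=25.0, sigma_space=3.0):
    """RGF: Gaussian initialization, then iterated joint bilateral filtering
    with the previous result as the guidance image (Zhang et al., 2014)."""
    guide = cv2.GaussianBlur(img, (0, 0), sigma_space)
    for _ in range(iters):
        guide = cv2.ximgproc.jointBilateralFilter(guide, img, d,
                                                  sigma_color, sigma_space)
    return guide

def spatial_frequency(patch):
    """Standard spatial frequency sqrt(RF^2 + CF^2); the paper uses a
    *modified* SF, so this is a stand-in for illustration."""
    rf = np.mean(np.diff(patch, axis=1) ** 2)  # row frequency (squared)
    cf = np.mean(np.diff(patch, axis=0) ** 2)  # column frequency (squared)
    return np.sqrt(rf + cf)

def fuse_salient(s1, s2, win=16):
    """Patch-wise choose-max rule scored by nuclear norm + spatial
    frequency (a simplified reading of the paper's fusion rule)."""
    fused = np.zeros_like(s1)
    h, w = s1.shape
    for i in range(0, h, win):
        for j in range(0, w, win):
            p1, p2 = s1[i:i+win, j:j+win], s2[i:i+win, j:j+win]
            a1 = np.linalg.norm(p1, ord='nuc') + spatial_frequency(p1)
            a2 = np.linalg.norm(p2, ord='nuc') + spatial_frequency(p2)
            fused[i:i+win, j:j+win] = p1 if a1 >= a2 else p2
    return fused

def fuse_base(b1, b2, w1=0.5, w2=0.5):
    """l2-energy minimization: argmin_F w1*||F-b1||^2 + w2*||F-b2||^2
    has the closed-form weighted average below."""
    return (w1 * b1 + w2 * b2) / (w1 + w2)

def dlatlrr_rgf_fusion(ir, vis, L_proj):
    """End-to-end sketch: LatLRR split -> RGF on salient parts ->
    base = source - filtered salient -> fuse each part -> recombine."""
    # Applying L_proj to the whole image is a simplification; LatLRR-based
    # fusion methods typically apply it to vectorized image patches.
    salient = [L_proj @ ir, L_proj @ vis]
    salient = [rolling_guidance_filter(s) for s in salient]
    bases = [ir - salient[0], vis - salient[1]]  # final base components
    return fuse_base(*bases) + fuse_salient(*salient)
```

Note that the base-fusion step needs no iterative solver: differentiating the weighted l2 energy and setting the gradient to zero yields the pixel-wise weighted average directly.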
Funder
National Natural Science Foundation of China
Subject
General Earth and Planetary Sciences
Cited by: 2 articles.