Abstract
This paper presents a subwindow variance filtering algorithm for fusing infrared and visible light images, addressing the challenges of blurred details, low contrast, and missing edge features. First, the images to be fused undergo multilevel decomposition using a subwindow variance filter, yielding a base layer and multiple detail layers for each source image. PCANet extracts features from the base layers and produces the weight maps that guide base-layer fusion. For detail-layer fusion, a saliency measurement method extracts saliency maps from the source images; these maps are compared to obtain an initial weight map, which is then refined with guided filtering to steer the fusion of the detail layers. Finally, the fused base and detail layers are superimposed to produce the final fusion result. The proposed algorithm is evaluated through subjective and objective measures, including information entropy, mutual information, multiscale structural similarity, standard deviation, and visual information fidelity. The results demonstrate that the proposed algorithm preserves rich detail, high contrast, and edge information, making it a promising approach for infrared and visible image fusion.
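The pipeline summarized above (multilevel decomposition into base and detail layers, weighted base fusion, saliency-guided detail fusion, and superposition) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: a plain box-mean smoother stands in for the subwindow variance filter, a simple average stands in for the PCANet-derived base-layer weights, local variance stands in for the proposed saliency measure, and box smoothing of the binary weight map stands in for guided-filter refinement.

```python
import numpy as np


def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed with cumulative sums."""
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col so window sums index cleanly
    k = 2 * r + 1
    H, W = img.shape
    return (c[k:k + H, k:k + W] - c[:H, k:k + W]
            - c[k:k + H, :W] + c[:H, :W]) / k ** 2


def decompose(img, r=3, levels=2):
    """Multilevel decomposition: smoothing gives the base layer,
    successive residuals give the detail layers.
    (Box smoothing stands in for the subwindow variance filter.)"""
    details = []
    base = img.astype(float)
    for _ in range(levels):
        smooth = box_mean(base, r)
        details.append(base - smooth)
        base = smooth
    return base, details


def saliency(img, r=3):
    """Local variance as a stand-in saliency measure."""
    m = box_mean(img, r)
    return box_mean(img.astype(float) ** 2, r) - m ** 2


def fuse(ir, vis, r=3, levels=2):
    """Fuse two registered grayscale images of equal shape."""
    b1, d1 = decompose(ir, r, levels)
    b2, d2 = decompose(vis, r, levels)
    fused = 0.5 * (b1 + b2)                  # average stands in for PCANet weights
    s1, s2 = saliency(ir, r), saliency(vis, r)
    w = (s1 >= s2).astype(float)             # initial weight map from saliency comparison
    w = box_mean(w, r)                       # smoothing stands in for guided filtering
    for da, db in zip(d1, d2):
        fused = fused + w * da + (1.0 - w) * db
    return fused
```

Because the decomposition is a telescoping sum (base plus all detail layers reconstructs the input exactly), fusing an image with itself returns the original, which is a useful sanity check for any implementation of this scheme.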
Funder
National Natural Science Foundation of China
Natural Science Foundation of Chongqing