Abstract
At present, deep-learning-based infrared and visible image fusion methods extract insufficient features from the source images, which leads to imbalanced infrared and visible information in the fused images. To solve this problem, a multiscale feature pyramid network based on activity level weight selection (MFPN-AWS) with a complete downsampling–upsampling structure is proposed. The network consists of three parts: a downsampling convolutional network, an AWS fusion layer, and an upsampling convolutional network. First, multiscale deep features are extracted by the downsampling convolutional network, capturing rich information from the intermediate layers. Second, AWS exploits a dual fusion strategy of the l1-norm and global pooling to describe target saliency and texture detail, effectively balancing the multiscale infrared and visible features. Finally, the multiscale fused features are reconstructed by the upsampling convolutional network to obtain the fused image. Compared with nine state-of-the-art methods on the publicly available TNO and VIFB datasets, MFPN-AWS produces more natural and balanced fusion results, with better overall clarity and more salient targets, and achieves the best values on two metrics: mutual information and visual fidelity.
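The abstract does not give the exact AWS fusion rule, so the following is only a minimal PyTorch sketch of one plausible reading: a per-pixel weight from the channel-wise l1-norm (texture/saliency activity) combined with a per-channel weight from global average pooling, applied to one scale of infrared and visible features. All function and variable names here are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an activity-level-weight-selection (AWS) style fusion step.
import torch
import torch.nn.functional as F


def l1_spatial_weights(feat_ir: torch.Tensor, feat_vis: torch.Tensor, eps: float = 1e-8):
    """Per-pixel activity maps from the channel-wise l1-norm of each feature map."""
    act_ir = feat_ir.abs().sum(dim=1, keepdim=True)    # (B, 1, H, W)
    act_vis = feat_vis.abs().sum(dim=1, keepdim=True)
    total = act_ir + act_vis + eps
    return act_ir / total, act_vis / total


def global_pool_weights(feat_ir: torch.Tensor, feat_vis: torch.Tensor, eps: float = 1e-8):
    """Per-channel activity from global average pooling of each feature map."""
    g_ir = F.adaptive_avg_pool2d(feat_ir.abs(), 1)      # (B, C, 1, 1)
    g_vis = F.adaptive_avg_pool2d(feat_vis.abs(), 1)
    total = g_ir + g_vis + eps
    return g_ir / total, g_vis / total


def aws_fuse(feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
    """Fuse one scale of infrared/visible features; the two weight types are simply averaged here."""
    s_ir, s_vis = l1_spatial_weights(feat_ir, feat_vis)
    c_ir, c_vis = global_pool_weights(feat_ir, feat_vis)
    w_ir = 0.5 * (s_ir + c_ir)      # broadcasts to (B, C, H, W)
    w_vis = 0.5 * (s_vis + c_vis)
    return w_ir * feat_ir + w_vis * feat_vis


if __name__ == "__main__":
    ir = torch.randn(1, 64, 32, 32)    # one scale of infrared features
    vis = torch.randn(1, 64, 32, 32)   # matching visible features
    print(aws_fuse(ir, vis).shape)     # torch.Size([1, 64, 32, 32])
```

In the full method this kind of fusion would be applied at every pyramid scale before the upsampling convolutional network reconstructs the fused image; how the two weights are actually combined in MFPN-AWS is specified in the paper, not here.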
Funder
Shanghai Special Plan for Local Colleges and Universities for Capacity Building
National Natural Science Foundation of China
Subject
Computer Vision and Pattern Recognition; Atomic and Molecular Physics, and Optics; Electronic, Optical and Magnetic Materials
Cited by
3 articles.