Affiliation:
1. School of Automation and Electronic Information, Xiangtan University, Xiangtan, China
2. School of Geomatics, Liaoning Technical University, Fuxin, Liaoning, China
Abstract
Current dehazing networks usually learn haze features only in the colour space of a single image and often suffer from uneven dehazing, colour distortion, and edge degradation when confronted with ground objects at different scales in the depth space of the scene. The authors propose a multimodal feature fusion image dehazing method with a scene depth prior, built on an encoder–decoder backbone network. A multimodal feature fusion module is first designed. In this module, an affine transformation and a polarized self‐attention mechanism are used to fuse image colour features with depth prior features, improving the model's ability to represent haze features of ground objects at different scales in depth space. A feature enhancement module (FEM) is then added, in which deformable convolution and difference convolution are used to strengthen the model's representation of the geometric and texture features of ground objects. Publicly available dehazing datasets are used for comparison and ablation experiments. The results show that, compared with existing classical dehazing networks, the proposed method significantly improves peak signal‐to‐noise ratio (PSNR) and structural similarity (SSIM), achieves a more uniform dehazing effect across different depth ranges, and preserves the colour and edge details of ground objects well.
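The abstract does not include implementation details, but the two described components can be illustrated with a minimal sketch. The snippet below is an assumption of how a depth-guided affine fusion (depth prior features predicting per-pixel scale and shift for the image features) and a central difference convolution inside the FEM might be realized in PyTorch; module names, channel sizes, and the theta parameter are hypothetical and do not come from the paper, and the authors' actual modules (including the polarized self-attention and deformable convolution branches) may differ substantially.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthAffineFusion(nn.Module):
    """Hypothetical fusion: depth prior features modulate image features
    via a learned per-pixel affine transform (scale and shift)."""

    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Conv2d(channels, channels, 3, padding=1)
        self.shift = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, img_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # Affine modulation of colour features by depth-derived parameters.
        return img_feat * self.scale(depth_feat) + self.shift(depth_feat)


class CentralDifferenceConv2d(nn.Module):
    """One possible 'difference convolution': a standard 3x3 convolution
    minus a weighted response of the kernel sum, which emphasises local
    gradients (texture/edge cues)."""

    def __init__(self, in_ch: int, out_ch: int, theta: float = 0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        # Sum of each kernel acts as a 1x1 "centre" filter; subtracting it
        # leaves a difference (gradient-like) response.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_centre = F.conv2d(x, kernel_sum)
        return out - self.theta * out_centre


if __name__ == "__main__":
    img_feat = torch.randn(1, 64, 32, 32)    # encoder colour features (example shape)
    depth_feat = torch.randn(1, 64, 32, 32)  # depth prior features (example shape)
    fused = DepthAffineFusion(64)(img_feat, depth_feat)
    enhanced = CentralDifferenceConv2d(64, 64)(fused)
    print(fused.shape, enhanced.shape)
```

In the sketch, fusion and enhancement are chained in the order the abstract describes (fuse colour and depth features first, then enhance geometry and texture), which is the main design point the example is meant to convey.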
Publisher
Institution of Engineering and Technology (IET)
Subject
Electrical and Electronic Engineering, Computer Vision and Pattern Recognition, Signal Processing, Software
Cited by
1 article.