Author:
Yang Yuanbo, Lv Qunbo, Zhu Baoyu, Sui Xuefu, Zhang Yu, Tan Zheng
Abstract
Haze and mist caused by air quality, weather, and other factors reduce the clarity and contrast of images captured by cameras, which limits applications such as autonomous driving, satellite remote sensing, and traffic monitoring. The study of image dehazing is therefore of great significance. Most existing unsupervised image-dehazing algorithms rely on prior knowledge and simplified atmospheric scattering models; however, the physical causes of haze in the real world are complex, so these models are often inaccurate, which degrades the dehazing effect. Unsupervised generative adversarial networks can be applied to image-dehazing research, but because of the information inequality between hazy and haze-free images, the bi-directional domain-translation models commonly used in unsupervised generative adversarial networks are poorly suited to dehazing tasks, and they also fail to make good use of the extracted features, resulting in distortion, loss of image detail, and poor retention of image features in the dehazed output. To address these problems, this paper proposes an end-to-end one-sided unsupervised image-dehazing network, based on a generative adversarial network, that directly learns the mapping between hazy and haze-free images. The proposed feature-fusion module and multi-scale skip connections based on a residual network account for the feature information lost in convolution operations and fuse features at different scales, achieving adaptive fusion of low-level and high-level features to better preserve the features of the original image. The network is trained with multiple loss functions: an adversarial loss ensures that the network generates more realistic images, while a contrastive loss enforces a meaningful one-sided mapping from the hazy image to the haze-free image, yielding haze-free images with good quantitative metrics and visual effects.
Experiments demonstrate that, compared with existing dehazing algorithms, our method achieves better quantitative metrics and visual effects on both synthetic and real-world haze image datasets.
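The objective described above combines an adversarial term with a patch-wise contrastive term. As a rough illustration only (the paper's exact formulation, network features, and weighting are not given here), the sketch below implements a generic InfoNCE-style contrastive loss over feature vectors and a combined objective with an assumed weighting hyperparameter `lam`; all function names are hypothetical.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.07):
    """Generic InfoNCE contrastive loss: pull the query feature toward
    its corresponding positive feature and push it away from negatives.
    All vectors are L2-normalized before computing similarities."""
    q = query / np.linalg.norm(query)
    pos = positive / np.linalg.norm(positive)
    negs = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    # Logit 0 is the positive pair; the rest are negative pairs.
    logits = np.concatenate([[q @ pos], negs @ q]) / temperature
    logits -= logits.max()  # numerical stability for the softmax
    # Cross-entropy with the positive as the correct class (index 0).
    return -logits[0] + np.log(np.exp(logits).sum())

def total_loss(adv_loss, contrastive_loss, lam=1.0):
    """Combined objective: the adversarial term keeps outputs realistic,
    the contrastive term enforces a meaningful one-sided mapping.
    (lam is an assumed weighting hyperparameter, not from the paper.)"""
    return adv_loss + lam * contrastive_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, p = rng.normal(size=64), rng.normal(size=64)
    negs = rng.normal(size=(255, 64))
    print(total_loss(0.5, info_nce_loss(q, p, negs)))
```

In practice the query would be a feature patch from the generated haze-free image and the positive the corresponding patch from the hazy input, with negatives drawn from other spatial locations, so that content is preserved without requiring paired training data.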
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science