SE-RRACycleGAN: Unsupervised Single-Image Deraining Using Squeeze-and-Excitation-Based Recurrent Rain-Attentive CycleGAN
Published: 2024-07-19
Journal: Remote Sensing
Volume: 16
Issue: 14
Page: 2642
ISSN: 2072-4292
Language: en
Authors:
Wedajew Getachew Nadew 1, Xu Sendren Sheng-Dong 1,2
Affiliations:
1. Graduate Institute of Automation and Control, National Taiwan University of Science and Technology, Taipei 106, Taiwan
2. Advanced Manufacturing Research Center, National Taiwan University of Science and Technology, Taipei 106, Taiwan
Abstract
In computer vision tasks, the ability to remove rain from a single image is crucial for the effectiveness of subsequent high-level tasks in rainy conditions. Recently, numerous data-driven single-image deraining techniques have emerged, primarily relying on paired images (i.e., trained in a supervised manner). However, real deraining tasks commonly involve unpaired images. In such scenarios, removing rain streaks in an unsupervised manner becomes challenging, as there are no constraints between images, which leads to suboptimal restoration results. In this paper, we introduce a new unsupervised single-image deraining method called SE-RRACycleGAN, which does not require a paired dataset for training and effectively leverages the constrained transfer learning capability and cyclic structure inherent in CycleGAN. Since rain removal is closely tied to the analysis of texture features in the input image, we propose a novel recurrent rain attentive module (RRAM) to enhance the detection of rain-related information by jointly considering rainy and rain-free images. We also apply the squeeze-and-excitation enhancement technique to the generator network to effectively capture spatial contextual information across channels. Finally, a content loss is introduced to enhance the visual similarity between the input and generated images. Based on both quantitative and qualitative results, our method removes more rain streaks, preserves a smoother background, and more closely resembles the ground truth than competing approaches, without requiring paired training images. Extensive experiments on synthetic and real-world datasets demonstrate that our approach outperforms most unsupervised state-of-the-art techniques, particularly on the Rain12 dataset (achieving a PSNR of 34.60 and an SSIM of 0.954) and on real rainy images (achieving a PSNR of 34.17 and an SSIM of 0.953), and is highly competitive with supervised methods. Moreover, the performance of our model is evaluated using RMSE, FSIM, MAE, and the correlation coefficient, achieving remarkable results that indicate a high degree of accuracy in rain removal and strong preservation of the original image's structural details.
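As a brief illustration of two components the abstract mentions, the PyTorch sketch below shows a standard squeeze-and-excitation channel-attention block and a simple per-pixel content loss. This is a minimal sketch under assumptions: the class name SEBlock, the reduction ratio of 16, and the L1 form of the content loss are illustrative choices, not the exact configuration used in SE-RRACycleGAN.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., 2018).
    Illustrative sketch only; not the paper's exact generator design."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: learn per-channel gating weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)          # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)      # (B, C, 1, 1) channel weights
        return x * w                         # reweight the feature maps channel-wise

def content_loss(x_input: torch.Tensor, x_generated: torch.Tensor) -> torch.Tensor:
    """Content loss encouraging visual similarity between input and generated images,
    sketched here as a plain L1 term; the paper's exact formulation may differ."""
    return nn.functional.l1_loss(x_generated, x_input)

In a CycleGAN-style training loop, a term such as content_loss(rainy, derained) would be added to the adversarial and cycle-consistency losses, while SEBlock modules would be interleaved with the generator's convolutional layers.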
Funder
Ministry of Science and Technology (MOST), Taiwan