Affiliation:
1. School of Information Science and Technology, Northwest University, Xi’an 710127, China
2. School of Physics and Photoelectric Engineering, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310012, China
3. Department of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
Abstract
Remote sensing images are highly vulnerable to cloud interference during imaging. Cloud occlusion, especially thick cloud occlusion, significantly degrades image quality, and the occluded regions lose their ground information, which in turn affects the many subsequent tasks that rely on these images. To address this problem, a thick cloud removal method based on a temporality global–local structure is proposed. The method comprises two stages: a global multi-temporal feature fusion (GMFF) stage and a local single-temporal information restoration (LSIR) stage; fused global multi-temporal features are used to restore the information occluded by thick cloud in each local single-temporal image. A global–local structure is then built into both stages, combining the global feature capture ability of the Transformer with the local feature extraction ability of the CNN, with the goal of effectively retaining the detailed information of the remote sensing images. Finally, a local feature extraction (LFE) module and a global–local feature extraction (GLFE) module are designed according to these global–local characteristics, with different module details in the two stages. Experimental results show that, on the constructed data set, the proposed method significantly outperforms the compared methods on the multi-temporal thick cloud removal task. Across the four scenes, compared with the best competing method, CMSN, the peak signal-to-noise ratio (PSNR) improved by 2.675, 5.2255, and 4.9823 dB in the first, second, and third temporal images, respectively, an average improvement of 9.65% over the three temporal images. The correlation coefficient (CC) improved by 0.016, 0.0658, and 0.0145 in the first, second, and third temporal images, respectively, an average improvement of 3.35%.
Structural similarity (SSIM) and root mean square error (RMSE) improved by 0.33% and 34.29%, respectively. Consequently, in the field of multi-temporal cloud removal, the proposed method makes better use of multi-temporal information and achieves more effective thick cloud restoration.
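The abstract's global–local idea, a Transformer-style branch for global context fused with a CNN-style branch for local detail, can be sketched minimally as follows. This is not the paper's GLFE/LFE implementation; the function names (`global_branch`, `local_branch`, `global_local_block`), the single-head attention, the 3×3 average filter standing in for a learned convolution, and the additive fusion are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_branch(feat):
    # Transformer-style global path: single-head self-attention over
    # all spatial positions, so every pixel can attend to every other.
    h, w, c = feat.shape
    tokens = feat.reshape(h * w, c)                    # flatten grid to tokens
    attn = softmax(tokens @ tokens.T / np.sqrt(c))     # (hw, hw) attention map
    return (attn @ tokens).reshape(h, w, c)

def local_branch(feat):
    # CNN-style local path: a fixed 3x3 average filter stands in for a
    # learned convolution's local receptive field (illustrative only).
    h, w, c = feat.shape
    pad = np.pad(feat, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(feat)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + h, dx:dx + w]
    return out / 9.0

def global_local_block(feat):
    # Fuse the global (attention) and local (conv) paths by summation.
    return global_branch(feat) + local_branch(feat)

feat = np.random.rand(8, 8, 4).astype(np.float32)
out = global_local_block(feat)
print(out.shape)  # (8, 8, 4)
```

In the paper's multi-temporal setting, such a block would operate on features fused across temporal images rather than on a single feature map as here.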
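For reference, the scalar metrics quoted in the abstract can be computed as below. This is a generic sketch of PSNR, RMSE, and the Pearson correlation coefficient (CC) for images scaled to [0, 1]; it is not tied to the paper's evaluation code, and the test images are synthetic.

```python
import numpy as np

def rmse(ref, est):
    # Root mean square error between reference and estimated images.
    return np.sqrt(np.mean((ref - est) ** 2))

def psnr(ref, est, peak=1.0):
    # Peak signal-to-noise ratio in dB for images with the given peak value.
    return 20.0 * np.log10(peak / rmse(ref, est))

def cc(ref, est):
    # Pearson correlation coefficient between the two images.
    return np.corrcoef(ref.ravel(), est.ravel())[0, 1]

np.random.seed(0)
ref = np.random.rand(64, 64)
est = np.clip(ref + np.random.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(rmse(ref, est), psnr(ref, est), cc(ref, est))
```

Higher PSNR and CC (and lower RMSE) indicate that the restored cloud-free regions agree more closely with the reference image.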
Funder
National Natural Science Foundation of China
Key Research and Development Program of Shaanxi Province of China
Research Funds of Hangzhou Institute for Advanced Study
Subject
General Earth and Planetary Sciences
Cited by 5 articles.