Attention Guide Axial Sharing Mixed Attention (AGASMA) Network for Cloud Segmentation and Cloud Shadow Segmentation
Published: 2024-07-02
Issue: 13
Volume: 16
Page: 2435
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Gu Guowei 1, Wang Zhongchen 1, Weng Liguo 1, Lin Haifeng 2, Zhao Zikai 1,3, Zhao Liling 1
Affiliation:
1. Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing University of Information Science and Technology, Nanjing 210044, China
2. College of Information Science and Technology, Nanjing Forestry University, Nanjing 210000, China
3. Department of Computer Science, University of Reading, Whiteknights, Reading RG6 6DH, UK
Abstract
Segmenting clouds and their shadows is a critical challenge in remote sensing image processing. The shape, texture, lighting conditions, and background of clouds and their shadows all affect the accuracy of cloud detection. Architectures that maintain a high-resolution representation throughout the entire information-extraction process are rapidly emerging. Such parallel architectures, which combine high- and low-resolution branches, produce detailed high-resolution representations and improve segmentation accuracy. This paper adopts this parallel high- and low-resolution design. To process the high- and low-resolution branches, it employs a hybrid of Transformer and CNN models; the interaction between the two models allows both semantic and spatial details to be extracted from the images. To address inadequate fusion and significant information loss between the high- and low-resolution branches, this paper introduces Axial Sharing Mixed Attention (ASMA), which establishes pixel-level dependencies between the high-resolution and low-resolution features to make their fusion more effective. In addition, to strengthen the focus on critical information in remote sensing images, an Attention Guide Module (AGM) is introduced, which integrates attention cues from the original features into ASMA and thereby alleviates the insufficient channel modeling of the self-attention mechanism. Experimental results on the Cloud and Cloud Shadow dataset, the SPARCS dataset, and the CSWV dataset demonstrate the effectiveness of our method, which surpasses state-of-the-art techniques for cloud and cloud shadow segmentation.
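To make the fusion idea described above concrete, the following is a minimal PyTorch sketch of axial attention that mixes a high-resolution and a low-resolution feature map through shared 1x1 projections. It illustrates the general mechanism only; it is not the authors' released ASMA/AGM implementation, and all module names, shapes, and design choices (bilinear upsampling, residual fusion, query/key assignment per branch) are assumptions.

```python
# Illustrative sketch only: axial attention that fuses a high-resolution and a
# low-resolution feature map. Not the paper's exact ASMA/AGM code; shapes and
# design choices are assumptions made for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AxialSharingMixedAttention(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.scale = (channels // heads) ** -0.5
        # 1x1 projections shared by both resolution branches.
        self.to_q = nn.Conv2d(channels, channels, 1, bias=False)
        self.to_k = nn.Conv2d(channels, channels, 1, bias=False)
        self.to_v = nn.Conv2d(channels, channels, 1, bias=False)
        self.proj = nn.Conv2d(channels, channels, 1)

    def _axial_attn(self, q, k, v, axis: int):
        # Attend along a single spatial axis (2 = height, 3 = width),
        # keeping the cost linear in the other axis.
        b, c, h, w = q.shape
        hd = c // self.heads
        perm = (0, 1, 4, 3, 2) if axis == 2 else (0, 1, 3, 4, 2)
        q = q.view(b, self.heads, hd, h, w).permute(*perm)
        k = k.view(b, self.heads, hd, h, w).permute(*perm)
        v = v.view(b, self.heads, hd, h, w).permute(*perm)
        attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
        out = attn @ v
        back = (0, 1, 4, 3, 2) if axis == 2 else (0, 1, 4, 2, 3)
        return out.permute(*back).reshape(b, c, h, w)

    def forward(self, hi_res: torch.Tensor, lo_res: torch.Tensor) -> torch.Tensor:
        # Upsample the low-resolution branch so the two maps can exchange
        # pixel-level information, then mix height-axis and width-axis attention.
        lo_up = F.interpolate(lo_res, size=hi_res.shape[-2:], mode="bilinear",
                              align_corners=False)
        q = self.to_q(hi_res)   # queries from the detailed (spatial) branch
        k = self.to_k(lo_up)    # keys/values from the semantic branch
        v = self.to_v(lo_up)
        mixed = self._axial_attn(q, k, v, axis=2) + self._axial_attn(q, k, v, axis=3)
        return hi_res + self.proj(mixed)  # residual fusion of the two branches


if __name__ == "__main__":
    asma = AxialSharingMixedAttention(channels=64)
    hi = torch.randn(1, 64, 64, 64)   # high-resolution features
    lo = torch.randn(1, 64, 16, 16)   # low-resolution features
    print(asma(hi, lo).shape)         # torch.Size([1, 64, 64, 64])
```

In this sketch the high-resolution branch supplies the queries and the upsampled low-resolution branch supplies the keys and values, so each detailed pixel can attend to semantic context along its row and column; the paper's AGM would additionally inject attention cues derived from the original features, which is omitted here.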
Funder
National Natural Science Foundation of PR China