Authors
Guo Jingxia, Jia Nan, Bai Jinniu
Abstract
Recently, scenes in large high-resolution remote sensing (HRRS) datasets have been classified using convolutional neural network (CNN)-based methods. Such methods are well suited to spatial feature extraction and can classify images with relatively high accuracy. However, CNNs do not adequately capture long-distance dependencies among image regions and features, even though doing so is essential for HRRS image processing: the semantic content of these scenes is closely tied to the spatial relationships among the objects they contain. CNNs are also limited in handling large intra-class differences and high inter-class similarity. To overcome these challenges, this study combines a channel-spatial attention (CSA) mechanism with the Vision Transformer to propose an effective HRRS image scene classification framework, the Channel-Spatial Attention Transformer (CSAT). The proposed model extracts the channel and spatial features of HRRS images using CSA and the multi-head self-attention (MSA) mechanism in the transformer module. First, the HRRS image is passed through the CSA module and mapped into a sequence of flattened 2D patch vectors. Second, each patch vector is linearly projected to form an ordered token sequence, and positional and learnable (class) embeddings are added to the sequence so that long-range dependencies among image features can be captured. Next, MSA is used to extract image features, and residual connections complete the encoder, mitigating the vanishing-gradient problem and helping to avoid overfitting. Finally, a multi-layer perceptron classifies the scenes in the HRRS images. The CSAT network is evaluated on three public remote sensing scene image datasets: UC-Merced, AID, and NWPU-RESISC45. The experimental results show that the proposed CSAT network outperforms a selection of state-of-the-art methods in scene classification.
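To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of a CSAT-style classifier. It is reconstructed from the abstract alone: the CBAM-style form of the CSA block, the patch size, embedding width, depth, head count, and the `num_classes=45` default (matching NWPU-RESISC45) are all illustrative assumptions, not the authors' published configuration.

```python
# Hedged sketch of a CSAT-style model: CSA -> patch embedding -> class/position
# embeddings -> transformer encoder (MSA + residuals) -> MLP head.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel-spatial attention (an assumed form of the paper's CSA)."""
    def __init__(self, channels: int, reduction: int = 1):
        super().__init__()
        # Channel attention: squeeze spatial dims, then reweight each channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, max(channels // reduction, 1), 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(channels // reduction, 1), channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one conv over pooled per-pixel channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                    # channel reweighting
        avg = x.mean(dim=1, keepdim=True)              # spatial statistics
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial_conv(torch.cat([avg, mx], dim=1))

class CSAT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=256,
                 depth=6, heads=8, num_classes=45):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        self.csa = ChannelSpatialAttention(channels=3)
        # Flatten each patch and project it to `dim` (the abstract's
        # "linear transformation of each vector").
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        # Learnable class token and positional embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        # Pre-norm transformer encoder: MSA plus residual connections.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # MLP head over the class token for scene classification.
        self.head = nn.Sequential(nn.LayerNorm(dim),
                                  nn.Linear(dim, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.csa(x)                                      # channel-spatial attention
        x = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim) patch tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed      # prepend class token
        x = self.encoder(x)                                  # MSA with residuals
        return self.head(x[:, 0])                            # classify via class token

logits = CSAT()(torch.randn(2, 3, 224, 224))  # -> shape (2, 45)
```

Note that applying the CSA block to the raw 3-channel image before patchification follows the abstract's ordering ("mapped into ... patch vectors after passing to the CSA"); placing it after a convolutional stem with more channels would be an equally plausible reading.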
Funders
National Natural Science Foundation of China
Natural Science Foundation of Inner Mongolia Autonomous Region
Publisher
Springer Science and Business Media LLC
Cited by
24 articles.