Spectral-Spatial Center-Aware Bottleneck Transformer for Hyperspectral Image Classification
Published: 2024-06-13
Volume: 16
Issue: 12
Page: 2152
ISSN: 2072-4292
Container-title: Remote Sensing
Short-container-title: Remote Sensing
Language: en
Author:
Zhang Meng 1,2, Yang Yi 1,2, Zhang Sixian 1,2, Mi Pengbo 1,2, Han Deqiang 3
Affiliation:
1. State Key Laboratory for Strength and Vibration of Mechanical Structures, Xi’an Jiaotong University, Xi’an 710049, China
2. School of Aerospace Engineering, Xi’an Jiaotong University, Xi’an 710049, China
3. School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China
Abstract
Hyperspectral images (HSIs) contain abundant spectral-spatial information and are widely used in many fields. HSI classification is a fundamental and important task that aims to assign each pixel a specific class label. However, high spectral variability and limited labeled samples make HSI classification challenging: data separability is poor, and highly discriminative semantic features are difficult to learn. To address these problems, a novel spectral-spatial center-aware bottleneck Transformer (S2CABT) is proposed. First, highly relevant spectral information and complementary spatial information at different scales are integrated to reduce the impact of high spectral variability and enhance the HSI’s separability. Then, a feature correction layer is designed to model cross-channel interactions, promoting effective cooperation between channels and strengthening the overall feature representation. Finally, a center-aware self-attention is constructed to model spatial long-range interactions and to focus on neighboring pixels whose spectral-spatial properties are consistent with those of the central pixel. Experimental results on common benchmark datasets show that, compared with state-of-the-art classification methods, S2CABT achieves better classification performance and robustness and strikes a good compromise between complexity and performance.
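To make the "center-aware self-attention" idea concrete, the sketch below biases ordinary multi-head self-attention toward pixels that are spectrally similar to the central pixel of a patch. This is an illustrative interpretation only: the class name CenterAwareSelfAttention, the cosine-similarity bias, and all dimensions are assumptions made here and do not reproduce the paper's exact design.

```python
# A minimal, runnable PyTorch sketch of a "center-aware" attention bias.
# The bias form, the class name, and the sizes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterAwareSelfAttention(nn.Module):
    """Self-attention over the pixels of a patch, biased toward keys that
    resemble the central pixel (a hypothetical reading of 'center-aware')."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); tokens are the pixels of a flattened patch
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, d // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each: (b, heads, n, d_head)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (b, heads, n, n)

        # Center-aware bias (assumed form): cosine similarity between every
        # pixel and the central pixel, added to the attention logits so that
        # spectrally consistent neighbors receive larger weights.
        center = F.normalize(x[:, n // 2 : n // 2 + 1, :], dim=-1)  # (b, 1, dim)
        sim = (F.normalize(x, dim=-1) * center).sum(-1)             # (b, n)
        attn = attn + sim[:, None, None, :]             # broadcast over heads/queries

        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

# Usage: a batch of two 7x7 patches with 64-dimensional spectral-spatial features.
tokens = torch.randn(2, 49, 64)
print(CenterAwareSelfAttention(dim=64)(tokens).shape)   # torch.Size([2, 49, 64])
```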
Funder
National Natural Science Foundation of China