An efficient Transformer with neighborhood contrastive tokenization for hyperspectral images classification
Published: 2024-07
Issue:
Volume: 131
Page: 103979
ISSN: 1569-8432
Container-title: International Journal of Applied Earth Observation and Geoinformation
Language: en
Short-container-title: International Journal of Applied Earth Observation and Geoinformation
Authors: Liang Miaomiao, Zhang Xianhao, Yu Xiangchun, Yu Lingjuan, Meng Zhe, Zhang Xiaohong, Jiao Licheng
References: 44 articles.
1. Ahmad, M., et al., 2021. Hyperspectral image classification - traditional to deep models: A survey for future prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
2. Dosovitskiy, A., et al., 2020. An image is worth 16×16 words: Transformers for image recognition at scale.
3. Guo, Y., Stutz, D., Schiele, B., 2023. Robustifying token attention for vision transformers. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 17557–17568.
4. Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C., 2020. GhostNet: More features from cheap operations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1580–1589.
5. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R., 2022a. Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16000–16009.