A Novel Discriminative Enhancement Method for Few-Shot Remote Sensing Image Scene Classification
Published: 2023-09-18
Volume: 15
Issue: 18
Page: 4588
ISSN: 2072-4292
Container-title: Remote Sensing
Short-container-title: Remote Sensing
Language: en
Author:
Chen Yanqiao 1, Li Yangyang 2, Mao Heting 2, Liu Guangyuan 2, Chai Xinghua 1, Jiao Licheng 2
Affiliation:
1. The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang 050081, China
2. Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Joint International Research Laboratory of Intelligent Perception and Computation, International Research Center for Intelligent Perception and Computation, Collaborative Innovation Center of Quantum Information of Shaanxi Province, School of Artificial Intelligence, Xidian University, Xi’an 710071, China
Abstract
Remote sensing image scene classification (RSISC) has garnered significant attention in recent years. Numerous methods have been proposed to tackle this task, particularly deep learning methods, which have shown promising performance in classifying remote sensing images (RSIs). However, deep learning methods typically require a substantial amount of labeled data to converge effectively, and acquiring sufficient labeled data often demands significant human and material resources. Hence, few-shot RSISC is highly meaningful. Fortunately, the recently proposed deep nearest neighbor neural network based on the attention mechanism (DN4AM) model incorporates episodic training and class-related attention mechanisms, effectively reducing the impact of background noise regions on classification results. Nevertheless, DN4AM does not address the significant intra-class variability and substantial inter-class similarity observed in RSI scenes. Therefore, the discriminative enhanced attention-based deep nearest neighbor neural network (DEADN4) is proposed to address the few-shot RSISC task. Our method makes three contributions. Firstly, we introduce center loss to enhance intra-class feature compactness. Secondly, we utilize the deep local-global descriptor (DLGD) to increase inter-class feature differentiation. Lastly, we modify the Softmax loss by incorporating a cosine margin to amplify inter-class feature dissimilarity. Experiments are conducted on three diverse RSI datasets to gauge the efficacy of our approach. Through comparative analysis with various cutting-edge methods, including MatchingNet, RelationNet, MAML, Meta-SGD, DN4, and DN4AM, our approach shows promising results on the few-shot RSISC task.
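Two of the discriminative-enhancement ideas mentioned in the abstract, center loss for intra-class compactness and a cosine margin added to the Softmax loss for inter-class separation, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names, the margin/scale values, and the use of a generic CosFace-style margin are illustrative assumptions.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance of each feature
    vector to its class center, pulling same-class features together."""
    diffs = features - centers[labels]          # (batch, dim)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def cosine_margin_softmax_loss(features, labels, weights, m=0.35, s=30.0):
    """Softmax cross-entropy on cosine similarities, with a margin m
    subtracted from the true-class cosine (CosFace-style) so classes
    must be separated by at least that margin in angular space.
    m and s are illustrative hyperparameter choices."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                  # (batch, classes)
    cos[np.arange(len(labels)), labels] -= m     # penalize the true class
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])
```

In training, both terms would typically be added to the main episodic classification objective with weighting coefficients, and the class centers updated as running means of the features assigned to each class.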
Funder
National Natural Science Foundation of China; Research Project of SongShan Laboratory; Natural Science Basic Research Program of Shaanxi; Fund for Foreign Scholars in University Research and Teaching Programs
Subject
General Earth and Planetary Sciences