Abstract
Few-shot learning has achieved great success in computer vision. However, when applied to Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR), it tends to perform poorly because the differences between SAR images and optical images are ignored. Moreover, applying the same transformation to both kinds of images may produce different results and even introduce unexpected noise. In this paper, we propose an improved Prototypical Network (PN) based on spatial transformation, referred to as ST-PN. Cascaded after the last convolutional layer, a spatial transformer module performs feature-wise rather than pixel-wise alignment, so more semantic information can be exploited; pixel-wise alignment, by contrast, suffers from large divergence even between images of the same target. Operating on the deeper layer, with its fewer parameters, also reduces the computational cost. A rotation transformation is used to reduce the discrepancies caused by different observation angles of the same class. A final comparison with four additional losses indicates that a single cross-entropy loss over the distances is sufficient. Our work achieves state-of-the-art performance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset.
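The abstract describes cascading a spatial transformer after the last convolutional layer so that alignment happens on feature maps rather than pixels, with class prototypes and a single cross-entropy loss over embedding distances. The sketch below illustrates that idea in PyTorch; the Conv-4 backbone, the localization-network design, and the episode shapes are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch of the ST-PN idea: a spatial transformer applied to the
# *feature maps* of a Prototypical Network embedding, not to raw pixels.
# Layer sizes and the localization net are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Conv-BN-ReLU-MaxPool block, as in common Conv-4 few-shot backbones."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class FeatureSpatialTransformer(nn.Module):
    """Predicts an affine transform from the feature maps and resamples them,
    aligning features (e.g. compensating rotation from differing view angles)."""

    def __init__(self, channels):
        super().__init__()
        self.localization = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts stable.
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, feats):
        theta = self.localization(feats).view(-1, 2, 3)
        grid = F.affine_grid(theta, feats.size(), align_corners=False)
        return F.grid_sample(feats, grid, align_corners=False)


class STPrototypicalNet(nn.Module):
    def __init__(self, in_ch=1, hid=64):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_ch, hid), conv_block(hid, hid),
            conv_block(hid, hid), conv_block(hid, hid),
        )
        # Spatial transformer cascaded after the last convolutional layer.
        self.stn = FeatureSpatialTransformer(hid)

    def embed(self, x):
        return self.stn(self.encoder(x)).flatten(1)

    def forward(self, support, support_labels, query, n_way):
        z_s, z_q = self.embed(support), self.embed(query)
        # Class prototypes: mean embedding of each class's support samples.
        protos = torch.stack(
            [z_s[support_labels == c].mean(0) for c in range(n_way)])
        # Negative squared Euclidean distances serve as logits; a single
        # cross-entropy loss over them matches the abstract's finding.
        return -torch.cdist(z_q, protos) ** 2
```

In a training episode, the returned logits would be passed directly to `F.cross_entropy(logits, query_labels)`, reflecting the abstract's observation that no additional auxiliary losses are needed.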
Subject
General Earth and Planetary Sciences
Cited by
12 articles.