Few-Shot Image Classification Based on Swin Transformer + CSAM + EMD
Published: 2024-05-29
Issue: 11
Volume: 13
Page: 2121
ISSN: 2079-9292
Container-title: Electronics
Short-container-title: Electronics
Language: en
Authors:
Sun Huadong 1,2, Zhang Pengyi 1, Zhang Xu 1,2, Han Xiaowei 1,2
Affiliations:
1. School of Computer and Information Engineering, Harbin University of Commerce, Harbin 150028, China
2. Heilongjiang Provincial Key Laboratory of Electronic Commerce and Information Processing, Harbin 150028, China
Abstract
In few-shot image classification (FSIC), the feature extraction module of traditional convolutional neural networks is often constrained by the local nature of the convolution kernel, making it difficult to capture global information and long-range dependencies. To address this problem, this paper proposes an innovative FSIC method, STCE, which integrates the Swin Transformer, the CSAM attention mechanism, and Earth Mover's Distance (EMD). The Swin Transformer network is used for image feature extraction, CSAM attention weighting is applied to the output feature map, and the EMD algorithm generates the optimal matching flow between structural units so that the matching cost is minimized. This allows the classification distance between images to be represented more precisely. Extensive experiments validate the effectiveness of the algorithm. On three commonly used few-shot datasets, mini-ImageNet, tiered-ImageNet, and FC100, the one-shot and five-shot accuracies reach the state of the art (SOTA) for FSIC: mini-ImageNet achieves 98.65 ± 0.1% for one-shot and 99.6 ± 0.2% for five-shot tasks, while tiered-ImageNet achieves 91.6 ± 0.1% for one-shot and 96.55 ± 0.27% for five-shot tasks. For FC100, the accuracy is 64.1 ± 0.3% for one-shot and 79.8 ± 0.69% for five-shot tasks. On two further commonly used few-shot datasets, CUB and CIFAR-FS, CUB achieves 83.1 ± 0.4% for one-shot and 92.88 ± 0.4% for five-shot tasks, while CIFAR-FS achieves 86.95 ± 0.2% for one-shot and 94 ± 0.4% for five-shot tasks.
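To make the matching step of the abstract concrete, below is a minimal sketch of how an EMD-based similarity between the structural units of two images could be computed. It assumes that feature extraction (Swin Transformer followed by CSAM weighting) has already produced a set of local unit embeddings per image; the function name `emd_similarity`, the uniform unit weights, the cosine ground cost, and the use of SciPy's linear-programming solver are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an EMD-based optimal matching between two images'
# structural-unit embeddings (illustrative; not the paper's code).
import numpy as np
from scipy.optimize import linprog


def emd_similarity(units_a: np.ndarray, units_b: np.ndarray) -> float:
    """Optimal-matching similarity between two sets of unit embeddings.

    units_a: (m, c) local features of image A (e.g., a flattened feature map)
    units_b: (n, c) local features of image B
    Returns the flow-weighted cosine similarity under the optimal transport
    plan with uniform unit weights (a common simplification).
    """
    a = units_a / np.linalg.norm(units_a, axis=1, keepdims=True)
    b = units_b / np.linalg.norm(units_b, axis=1, keepdims=True)
    sim = a @ b.T                      # (m, n) cosine similarities
    cost = 1.0 - sim                   # ground cost for the transport problem
    m, n = cost.shape

    # Uniform supplies/demands over units (the method may learn these weights).
    supply = np.full(m, 1.0 / m)
    demand = np.full(n, 1.0 / n)

    # Equality constraints: row sums of the flow equal the supplies,
    # column sums equal the demands.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([supply, demand])

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    flow = res.x.reshape(m, n)         # optimal matching flow
    return float((flow * sim).sum())   # higher = more similar images
```

In a few-shot episode, such a similarity would be evaluated between a query image and each support class, with the best-matching (lowest-cost) class taken as the prediction.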
Funders:
Harbin City Science and Technology Plan Projects
Basic Research Support Program for Excellent Young Teachers in Provincial Undergraduate Universities in Heilongjiang Province
Collaborative Innovation Achievement Program of Double First-class Disciplines in Heilongjiang Province