Abstract
Few-shot learning (FSL) approaches, which are mostly neural network-based, assume that pre-trained knowledge can be obtained from base (seen) classes and transferred to novel (unseen) classes. However, the black-box nature of neural networks makes it difficult to understand what is actually transferred, which may hamper the application of FSL in risk-sensitive areas. In this paper, we present a new way to perform FSL for image classification, using a visual representation from the backbone model together with patterns generated by a self-attention-based explainable module. The pattern-weighted representation retains only a minimal number of distinguishable features, and the visualized patterns serve as an informative hint about the transferred knowledge. Experimental results on three mainstream datasets show that the proposed method provides satisfactory explainability while achieving high classification accuracy. Code is available at https://github.com/wbw520/MTUNet.
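To illustrate the idea of weighting backbone features by attention patterns, the following is a minimal sketch, not the authors' MTUNet implementation (which is available at the repository above). All module and variable names (PatternAttention, num_patterns, key_proj, the stand-in backbone) are illustrative assumptions.

# Minimal sketch (assumption, not the released MTUNet code): backbone feature
# maps are weighted by attention "patterns" from a self-attention-style module,
# producing a compact representation that could be used for few-shot matching.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatternAttention(nn.Module):
    """Toy self-attention module: learnable pattern queries attend over
    spatial locations of a backbone feature map."""

    def __init__(self, feat_dim: int, num_patterns: int):
        super().__init__()
        # Learnable pattern queries (one row per pattern).
        self.patterns = nn.Parameter(torch.randn(num_patterns, feat_dim))
        self.key_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, feat_map: torch.Tensor):
        # feat_map: (B, C, H, W) -> spatial tokens: (B, H*W, C)
        b, c, h, w = feat_map.shape
        tokens = feat_map.flatten(2).transpose(1, 2)
        keys = self.key_proj(tokens)                        # (B, HW, C)
        # Attention of each pattern over spatial locations.
        attn = torch.einsum("pc,bnc->bpn", self.patterns, keys)
        attn = F.softmax(attn / c ** 0.5, dim=-1)           # (B, P, HW)
        # Pattern-weighted representation: (B, P, C)
        weighted = torch.einsum("bpn,bnc->bpc", attn, tokens)
        # The attention maps can be reshaped to (B, P, H, W) and visualized
        # as a hint on what knowledge is transferred.
        return weighted, attn.view(b, -1, h, w)


if __name__ == "__main__":
    backbone = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # stand-in backbone
    module = PatternAttention(feat_dim=64, num_patterns=5)
    images = torch.randn(2, 3, 32, 32)
    rep, patterns = module(backbone(images))
    print(rep.shape, patterns.shape)  # (2, 5, 64) and (2, 5, 32, 32)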
Funder
Japan Society for the Promotion of Science
Council for Science, Technology and Innovation
Cross-ministerial Strategic Innovation Promotion Program
Innovative AI Hospital System
JST FOREST
Publisher
Springer Science and Business Media LLC