INTS-Net: Improved Navigator-Teacher-Scrutinizer Network for Fine-Grained Visual Categorization

Published: 2023-04-04
Journal: Electronics (ISSN 2079-9292), Volume 12, Issue 7, Page 1709
Language: en
Authors: Jin Huilong, Xie Jiangfan, Zhao Jia, Zhang Shuang, Wen Tian, Liu Song, Li Ziteng
Affiliation: College of Engineering, Hebei Normal University, Shijiazhuang 050024, China
Abstract
Fine-grained image recognition, a significant branch of computer vision, is now prevalent in many real-world applications. It is more challenging than general image recognition because the differences that distinguish sub-categories are subtle and highly localized. Many classic models, including Bilinear Convolutional Neural Networks (Bilinear CNNs) and Destruction and Construction Learning (DCL), have been proposed to address this challenge. This paper focuses on optimizing the Navigator-Teacher-Scrutinizer Network (NTS-Net). The structure of NTS-Net gives it a strong ability to capture subtle informative regions; however, we find that this advantage can become a bottleneck for the model's learning: during training, the loss on the training set approaches zero prematurely, which hinders later learning. We therefore propose the INTS-Net model, in which the Stochastic Partial Swap (SPS) method is flexibly added to the feature extractor of NTS-Net. By injecting noise into the model during training, neurons are activated in a more balanced and efficient manner. In addition, we obtain a speedup of about 4.5% in test time by fusing batch normalization into the preceding convolution. Experiments conducted on CUB-200-2011 and Stanford Cars demonstrate the superiority of INTS-Net.
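The two mechanisms named in the abstract can be sketched in a few lines. The function names, shapes, and the element-wise swapping granularity below are illustrative assumptions, not the authors' implementation: SPS is shown as randomly exchanging a fraction of feature elements between two samples during training, and the inference-time speedup is shown as the standard folding of a trained BatchNorm's affine transform into the preceding convolution's weights and bias.

```python
import numpy as np

def stochastic_partial_swap(feat_a, feat_b, swap_ratio=0.1, seed=None):
    """Hedged sketch of SPS: swap a random subset of elements between
    two feature tensors to inject noise during training."""
    rng = np.random.default_rng(seed)
    mask = rng.random(feat_a.shape) < swap_ratio   # True where we swap
    out_a = np.where(mask, feat_b, feat_a)
    out_b = np.where(mask, feat_a, feat_b)
    return out_a, out_b

def fuse_bn_into_conv(weight, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta into the
    preceding conv, so inference runs one layer instead of two.

    weight: (out_ch, in_ch, kh, kw); bias and BN stats: (out_ch,).
    """
    scale = gamma / np.sqrt(var + eps)             # per-output-channel scale
    fused_w = weight * scale[:, None, None, None]  # scale each output filter
    fused_b = (bias - mean) * scale + beta         # absorb shift into bias
    return fused_w, fused_b
```

The fusion is exact, not approximate: since both the convolution and BatchNorm (with frozen statistics) are affine maps, their composition is a single affine map, which is where the test-time speedup comes from.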
Funder
Industry-University-Research Innovation Foundation of Chinese University Science and Technology Project of Hebei Education Department
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
References (39 articles)
1. Fu, J., Zheng, H., and Mei, T. (2017, January 21–26). Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.
2. Huang, S., Xu, Z., Tao, D., and Zhang, Y. (2016, January 27–30). Part-stacked CNN for fine-grained visual categorization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.
3. Reed, S., Akata, Z., Lee, H., and Schiele, B. (2016, January 27–30). Learning deep representations of fine-grained visual descriptions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.
4. Lin, T.Y., RoyChowdhury, A., and Maji, S. (2015, January 11–18). Bilinear CNN models for fine-grained visual recognition. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.
5. Sun, M., Yuan, Y., Zhou, F., and Ding, E. (2018, January 8–14). Multi-attention multi-class constraint for fine-grained image recognition. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.
Cited by: 1 article.