Affiliation:
1. State Grid UHV Transmission Co. of SEPC, Taiyuan, Shanxi, 030000, China
Abstract
It is well known in image recognition that global features capture the overall appearance of an object and generalize well, while local features reflect fine details; both are important for extracting discriminative features. Recent research has shown that the performance of convolutional neural networks can be improved by introducing an attention module. In this paper, we propose a simple and effective channel attention module, the layer-feature-meets-channel attention module (LC module, LCM), which combines layer-wise global information with channel dependence to calibrate the correlation between channel features and then adaptively recalibrate channel-wise feature responses. Compared with traditional channel attention methods, the LC module exploits the most salient information in the overall features to refine the channel relationships. Empirical studies on CIFAR-10, CIFAR-100, and mini-ImageNet demonstrate its superiority over other attention modules across different DCNNs. Furthermore, we visualize the two-dimensional feature maps through class activation maps and intuitively analyze the effectiveness of the model.
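The abstract does not give the LC module's equations, so the following is only a minimal NumPy sketch of the generic channel-attention family it extends (squeeze-and-excitation style): global average pooling produces one descriptor per channel, a small bottleneck models channel dependence, and a sigmoid yields per-channel weights that recalibrate the feature map. All names, shapes, and the reduction ratio here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """SE-style channel attention on a feature map x of shape (C, H, W).

    1. Squeeze: global average pooling gives one descriptor per channel.
    2. Excitation: a two-layer bottleneck (w1: C -> C/r, w2: C/r -> C)
       models channel dependence; a sigmoid maps to weights in (0, 1).
    3. Recalibrate: each channel is rescaled by its attention weight.
    """
    c = x.shape[0]
    squeeze = x.mean(axis=(1, 2))            # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)   # ReLU bottleneck, (C/r,)
    weights = sigmoid(w2 @ hidden)           # (C,) per-channel weights
    return x * weights.reshape(c, 1, 1)      # broadcast over H and W

# Toy example: 4 channels, 8x8 spatial map, reduction ratio r = 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1
w2 = rng.standard_normal((4, 2)) * 0.1
y = channel_attention(x, w1, w2)
```

Because the sigmoid keeps every weight strictly between 0 and 1, the output has the same shape as the input and each channel is attenuated by a single scalar; the LC module differs in how it derives those weights, drawing on layer-wise global information rather than channel statistics alone.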
Funder
Science and Technology Project of State Grid Shanxi Electric Power Company
Subject
Electrical and Electronic Engineering, Instrumentation, Control and Systems Engineering
Cited by 1 article.