Affiliation:
1. Department of Artificial Intelligence, School of Artificial Intelligence (School of Future Technology), Nanjing University of Information Science & Technology, Nanjing, China
Abstract
The authors design a novel convolutional network architecture, the deep network with double reuses and convolutional shortcuts, built from new compressed reuse units. Each compressed reuse unit combines the reused features from its first 3 × 3 convolutional layer with the features from its last 3 × 3 convolutional layer to produce new feature maps; it simultaneously reuses the feature maps from all previous compressed reuse units to generate a shortcut via a 1 × 1 convolution, and then concatenates these new maps with this shortcut as the input to the next compressed reuse unit. The network uses the concatenated reused features from all compressed reuse units as the final features for classification. The inner‐ and outer‐unit feature reuses, together with the convolutional shortcut compressed from the previous outer‐unit feature reuses, alleviate the vanishing‐gradient problem by strengthening forward feature propagation inside and outside the units, improve the effectiveness of the features and reduce computational cost. Experimental results on the CIFAR‐10, CIFAR‐100, ImageNet ILSVRC 2012, Pascal VOC 2007 and MS COCO benchmark databases demonstrate the effectiveness of the authors' architecture for object recognition and detection compared with the state of the art.
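The data flow described above can be sketched as follows. This is a minimal NumPy illustration of the wiring only, not the paper's implementation: the 3 × 3 convolutional layers are passed in as opaque callables (`conv_first`, `conv_last` are hypothetical stand-ins), and the 1 × 1 convolution is written as its equivalent per-pixel linear map over channels. All channel counts are assumptions for shape illustration.

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution is a per-pixel linear map over channels.
    x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    return np.tensordot(w, x, axes=([1], [0]))

def compressed_reuse_unit(x, conv_first, conv_last, history):
    """One compressed reuse unit (CRU), per the abstract's description.
    Inner-unit reuse: the first 3x3 conv's output is reused by
    concatenating it with the last 3x3 conv's output."""
    a = conv_first(x)                           # first 3x3 conv features (reused)
    b = conv_last(a)                            # last 3x3 conv features
    new_maps = np.concatenate([a, b], axis=0)   # inner-unit feature reuse
    history.append(new_maps)                    # kept for outer-unit reuse
    return new_maps

def outer_shortcut(history, w):
    """Outer-unit reuse: concatenate the feature maps of all previous
    CRUs and compress them with a 1x1 convolution into a shortcut."""
    stacked = np.concatenate(history, axis=0)   # (sum of C_i, H, W)
    return conv1x1(stacked, w)
```

The input to the next unit is then `np.concatenate([new_maps, shortcut], axis=0)`, so each unit sees both its own fused features and a compressed summary of everything before it.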
Funder
National Natural Science Foundation of China
Publisher
Institution of Engineering and Technology (IET)
Subject
Computer Vision and Pattern Recognition, Software