Affiliations:
1. Faculty of Sciences of Monastir, Electronics and Microelectronics Laboratory, Monastir University, Monastir 5000, Tunisia
2. Higher Institute of Applied Sciences and Technology of Sousse, Sousse University, Sousse 4000, Tunisia
Abstract
The deep network in network (DNIN) model is an efficient instance of, and an important extension to, the convolutional neural network (CNN), consisting of alternating convolutional and pooling layers. In this model, a multilayer perceptron (MLP), a nonlinear function, replaces the linear filter used for convolution. Increasing the depth of DNIN can also improve classification accuracy, but training becomes more difficult, training time increases, and accuracy saturates and then degrades. This paper presents a new deep residual network in network (DrNIN) model, a deeper variant of DNIN. The model is an attractive architecture for on-chip implementation on FPGAs and can be applied to a variety of image recognition tasks. It has a homogeneous, multi-length architecture governed by the hyperparameter "L", which defines the model length. We apply the residual learning framework to DNIN, explicitly reformulating the convolutional layers as residual learning functions to mitigate the vanishing gradient problem and to ease and speed up training. We provide a comprehensive study showing that DrNIN models gain accuracy from significantly increased depth. On the CIFAR-10 dataset, we evaluate the proposed models with a depth of up to L = 5 DrMLPconv layers, 1.66x deeper than DNIN. The experimental results demonstrate the efficiency of the proposed method, which gives the model greater capacity to represent features and thus leads to better recognition performance.
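The residual reformulation described in the abstract, learning y = F(x) + x rather than y = F(x), can be sketched roughly in PyTorch as below. This is a minimal illustration only: the class name DrMLPConvBlock, the channel width, and the exact layer counts are assumptions for exposition, not the paper's published DrMLPconv configuration.

```python
# Minimal sketch of a residual MLPconv block: a spatial convolution followed
# by 1x1 convolutions (a per-pixel MLP), wrapped in an identity skip
# connection. Names and sizes are illustrative assumptions, not the
# authors' exact DrMLPconv design.
import torch
import torch.nn as nn

class DrMLPConvBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),  # per-pixel MLP layer
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),  # second MLP layer
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual reformulation: the block learns F(x) and outputs F(x) + x,
        # so gradients also flow through the identity path.
        return self.act(self.body(x) + x)

# Stacking L such blocks gives a model of "length" L, e.g. L = 5 as evaluated
# in the paper; the input matches a CIFAR-10-sized feature map.
model = nn.Sequential(*[DrMLPConvBlock(64) for _ in range(5)])
out = model(torch.randn(1, 64, 32, 32))
```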
Funder
Electronics and Microelectronics Laboratory
Subject
General Mathematics, General Medicine, General Neuroscience, General Computer Science
Cited by
29 articles.