Affiliation:
1. Department of CSE, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India
Abstract
Deep neural networks, composed of many hidden layers, perform nonlinear transformations of their inputs at multiple levels of abstraction. Although deep learning approaches achieve good results, they suffer from a drawback called catastrophic forgetting: a drop in performance on previously learned classes when a new class is added. Incremental learning is a learning paradigm in which existing knowledge is retained even as new data is acquired; training proceeds over multiple batches, and later learning sessions do not require the data used in earlier iterations. The Bayesian approach to incremental learning represents the network weights as probability distributions. The key idea, following Bayes' theorem, is to compute an updated posterior distribution over the weights and biases: the posterior obtained after one batch of data serves as the prior for the next, so beliefs can be updated iteratively in real time as data arrives. The proposed Bayesian model for incremental learning achieved an accuracy of 82%, and its execution time was lower on GPU (670 s) than on CPU (1165 s).
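The sequential Bayesian update described above, where the posterior after one batch becomes the prior for the next, can be sketched with a minimal conjugate example. This Beta–Bernoulli model is an illustrative assumption, not the paper's neural-network model; it only shows why earlier batches need not be revisited.

```python
# Sequential Bayesian updating: the posterior after each batch becomes
# the prior for the next batch, so earlier data is never reused.
# Illustrative Beta-Bernoulli conjugate model (an assumption for clarity,
# not the paper's Bayesian neural network).

def update(alpha, beta, batch):
    """Update a Beta(alpha, beta) prior with a batch of 0/1 observations."""
    successes = sum(batch)
    return alpha + successes, beta + len(batch) - successes

alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior
for batch in ([1, 1, 0], [1, 0, 0, 1], [1, 1]):
    alpha, beta = update(alpha, beta, batch)  # old batches not revisited

posterior_mean = alpha / (alpha + beta)  # belief after all batches
```

In a Bayesian neural network the same principle applies to the (approximate) posterior over weights and biases, typically maintained with variational or sampling methods rather than a closed-form conjugate update.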
Subject
Computer Science Applications, General Engineering, Modeling and Simulation