Abstract
Incrementally learning new information from a non-stationary stream of data, referred to as ‘continual learning’, is a key feature of natural intelligence, but a challenging problem for deep neural networks. In recent years, numerous deep learning methods for continual learning have been proposed, but comparing their performance is difficult due to the lack of a common framework. To help address this, we describe three fundamental types, or ‘scenarios’, of continual learning: task-incremental, domain-incremental and class-incremental learning. Each of these scenarios has its own set of challenges. To illustrate this, we provide a comprehensive empirical comparison of currently used continual learning strategies by performing the Split MNIST and Split CIFAR-100 protocols according to each scenario. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of the effectiveness of different strategies. The proposed categorization aims to structure the continual learning field by forming a key foundation for clearly defining benchmark problems.
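As a rough illustration (not code from the paper), the Python sketch below shows how one Split MNIST example poses a different prediction problem in each of the three scenarios: task-incremental learning receives the context (task) identity at test time, domain-incremental learning does not but shares a within-context output space, and class-incremental learning must discriminate among all classes seen so far. The names `SPLIT_MNIST_CONTEXTS` and `target` are illustrative choices of ours, not from the paper.

```python
# Split MNIST: the ten digits are split into five "contexts" of two classes each.
SPLIT_MNIST_CONTEXTS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def target(scenario: str, context_id: int, digit: int):
    """Return (context info given to the model, label it must predict)."""
    within = SPLIT_MNIST_CONTEXTS[context_id].index(digit)  # 0 or 1
    if scenario == "task":
        # Task-incremental: context identity is provided at test time;
        # the model only separates the two classes within that context.
        return context_id, within
    if scenario == "domain":
        # Domain-incremental: context identity is withheld, but the output
        # space is shared across contexts ("first or second class?").
        return None, within
    if scenario == "class":
        # Class-incremental: context identity is withheld and the model
        # must choose among all ten digit classes.
        return None, digit
    raise ValueError(f"unknown scenario: {scenario}")

for s in ("task", "domain", "class"):
    print(s, target(s, context_id=2, digit=4))
# task (2, 0) / domain (None, 0) / class (None, 4)
```

Under this framing, the same trained network faces a strictly harder inference problem from task- to domain- to class-incremental learning, which is consistent with the differences in difficulty the abstract reports.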
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Networks and Communications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Software