Affiliation:
1. University of Nottingham Malaysia Campus
Abstract
Lifelong learning, or incremental learning, in convolutional neural networks (CNNs) faces a well-known challenge called catastrophic forgetting, which degrades model performance when tasks are presented sequentially. While the naive approach of retraining the model on all previously seen training data can alleviate this issue to some extent, it is not scalable because storage requirements and retraining time grow rapidly as tasks accumulate. To address this challenge, we propose a novel incremental learning strategy that combines image data generation with exemplar selection. Specifically, we introduce a new type of autoencoder, the Perceptual Autoencoder, which reconstructs previously seen data while significantly compressing it and requires no retraining when new classes are introduced. The latent feature map from the undercomplete Perceptual Autoencoder is stored and used to reconstruct old training data for replay alongside new class data when necessary. Additionally, we employ example forgetting as the metric for exemplar selection, aiming to minimize the amount of old-task training data retained while preserving model performance. Our proposed strategy achieves state-of-the-art performance on both the CIFAR-100 and ImageNet-100 datasets.
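To make the replay mechanism the abstract describes concrete, the following is a minimal PyTorch sketch, not the authors' implementation: an undercomplete convolutional autoencoder maps old-task images to small latent feature maps, only those latents are stored, and the decoder regenerates approximate training images when replay is needed. All layer shapes, names, and the 32x32 input size (CIFAR-like) are illustrative assumptions; the perceptual training loss the paper's autoencoder is named for is omitted here.

```python
# Sketch of latent-replay with an undercomplete autoencoder (illustrative only).
import torch
import torch.nn as nn

class UndercompleteAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3x32x32 image -> 16x8x8 latent (3x fewer values; assumed sizes).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),   # -> 32x16x16
            nn.ReLU(),
            nn.Conv2d(32, 16, 3, stride=2, padding=1),  # -> 16x8x8
        )
        # Decoder mirrors the encoder to rebuild images for replay.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, 4, stride=2, padding=1),  # -> 32x16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 3x32x32
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = UndercompleteAE().eval()  # assume already trained on old-task data

# Store only the compact latent maps of selected old-task exemplars
# (exemplar selection itself, e.g. via example forgetting, is not shown).
old_images = torch.rand(8, 3, 32, 32)        # stand-in for selected exemplars
with torch.no_grad():
    stored_latents = ae.encoder(old_images)  # 16x8x8 per image, kept in memory

# Later, decode the stored latents to reconstruct approximate old-task images
# and mix them into training batches alongside new-class data.
with torch.no_grad():
    replayed = ae.decoder(stored_latents)
print(stored_latents.shape, replayed.shape)  # (8, 16, 8, 8), (8, 3, 32, 32)
```

The key design point the sketch illustrates is that the autoencoder is trained once and then frozen, so adding new classes requires storing only their latents, not retraining the compressor.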
Publisher
Research Square Platform LLC