NLOCL: Noise-Labeled Online Continual Learning
Published: 2024-06-29
Issue: 13
Volume: 13
Page: 2560
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Cheng Kan (1), Ma Yongxin (2), Wang Guanglu (2), Zong Linlin (2), Liu Xinyue (2)
Affiliation:
1. China Academy of Space Technology, Beijing 100039, China
2. School of Software, Dalian University of Technology, Dalian 116024, China
Abstract
Continual learning (CL) from infinite data streams has become a challenge for neural network models in real-world scenarios. In this setting, catastrophic forgetting of previously acquired knowledge occurs, and existing supervised CL methods rely heavily on accurately labeled samples. However, real-world labels are often corrupted by noise, which misleads CL agents and aggravates forgetting. To address this problem, we propose a method named noise-labeled online continual learning (NLOCL), which trains an online CL model on noise-labeled data streams. NLOCL uses an experience replay strategy to retain crucial examples, separates the data stream with a small-loss criterion, and applies semi-supervised fine-tuning to the resulting labeled and unlabeled samples. In addition, NLOCL combines the small-loss criterion with a class diversity measure and eliminates online memory partitioning. Furthermore, we optimize the experience replay stage to enhance model performance by retaining significant clean-labeled examples and carefully selecting suitable samples. In the experiments, we construct noise-labeled data streams by injecting noisy labels into multiple datasets and partitioning them into tasks to realistically simulate infinite data streams. The experimental results demonstrate the superior performance and robust learning capability of the proposed method.
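To make the two core ideas in the abstract concrete, the sketch below illustrates, in PyTorch, (a) a small-loss split of a mini-batch into likely-clean and likely-noisy samples and (b) a replay selection that combines small loss with class diversity. This is a minimal sketch under stated assumptions, not the authors' implementation: every name (small_loss_split, select_for_replay), the clean_fraction split ratio, and the round-robin diversity heuristic are hypothetical illustrations of the described ideas.

```python
import torch
import torch.nn.functional as F

def small_loss_split(logits, labels, clean_fraction=0.5):
    # Small-loss criterion: samples the model already fits well (low
    # per-example cross-entropy) are treated as clean-labeled; the rest
    # are treated as unlabeled for semi-supervised fine-tuning.
    # clean_fraction is an assumed knob, not a value from the paper.
    losses = F.cross_entropy(logits, labels, reduction="none")
    order = torch.argsort(losses)                 # ascending loss
    k = max(1, int(clean_fraction * len(order)))
    return order[:k], order[k:]                   # (clean_idx, noisy_idx)

def select_for_replay(losses, labels, budget):
    # Combine small loss with class diversity: visit classes round-robin,
    # taking each class's lowest-loss sample first, so the replay buffer
    # stays both likely-clean and class-balanced (one plausible reading
    # of "class diversity", assumed here for illustration).
    by_class = {}
    for i, y in enumerate(labels.tolist()):
        by_class.setdefault(y, []).append(i)
    for idxs in by_class.values():
        idxs.sort(key=lambda i: losses[i].item())  # smallest loss first
    chosen = []
    while len(chosen) < budget and any(by_class.values()):
        for y in sorted(by_class):
            if by_class[y] and len(chosen) < budget:
                chosen.append(by_class[y].pop(0))
    return chosen

# Toy usage on a random batch (8 samples, 3 classes).
torch.manual_seed(0)
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
clean_idx, noisy_idx = small_loss_split(logits, labels)
per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
replay_idx = select_for_replay(per_sample_loss, labels, budget=4)
```

The round-robin pass is what keeps the buffer diverse: a pure small-loss ranking would let easy, over-represented classes crowd out the rest, whereas interleaving classes preserves coverage while still preferring likely-clean examples within each class.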
Funder
Social Science Planning Foundation of Liaoning Province under Grant