Abstract
We introduce a novel continual learning method based on multifidelity deep neural networks. This method learns the correlation between the output of previously trained models and the desired output of the model on the current training dataset, limiting catastrophic forgetting. On its own, the multifidelity continual learning method limits forgetting robustly across several datasets. Additionally, we show that the multifidelity method can be combined with existing continual learning methods, including replay and memory aware synapses, to further limit catastrophic forgetting. The proposed continual learning method is especially suited for physical problems where the data satisfy the same physical laws on each domain, or for physics-informed neural networks, because in these cases we expect a strong correlation between the output of the previous model and that of the model on the current training domain.
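The following is a minimal sketch of the multifidelity idea summarized above, not the authors' implementation: a frozen previously trained model supplies a low-fidelity prediction, and a new network learns the correlation between that prediction and the current task's target through a linear and a nonlinear branch. The class and parameter names (MFContinualNet, prev_model, alpha, hidden) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MFContinualNet(nn.Module):
    """Learns the correlation between a frozen previous model's output and
    the target on the current task via linear and nonlinear branches."""
    def __init__(self, prev_model, in_dim, out_dim, hidden=64):
        super().__init__()
        self.prev_model = prev_model
        for p in self.prev_model.parameters():
            p.requires_grad = False          # freeze the previous (low-fidelity) model
        # linear correlation branch (no activation)
        self.linear = nn.Linear(in_dim + out_dim, out_dim)
        # nonlinear correlation branch
        self.nonlinear = nn.Sequential(
            nn.Linear(in_dim + out_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )
        # learnable weight balancing the two branches
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        with torch.no_grad():
            y_prev = self.prev_model(x)      # low-fidelity prediction from the earlier task
        z = torch.cat([x, y_prev], dim=-1)   # condition on both input and previous output
        return self.alpha * self.linear(z) + (1.0 - self.alpha) * self.nonlinear(z)
```

Only the branch and combination weights are trained on the new task, so the previous model's knowledge is preserved by construction; in practice this can be combined with replay or regularization-based methods as noted above.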
Funder
Pacific Northwest National Laboratory