Authors: Antonio Carta, Andrea Cossu, Federico Errica, Davide Bacciu
Abstract
In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performance when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation of the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is so far the most effective strategy, and it also benefits the most from the use of regularization. Our findings suggest interesting directions for future research at the intersection of the continual learning and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments.
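The abstract singles out replay as the most effective continual learning strategy in the benchmark. As a rough illustration of the general idea only (not the authors' actual framework or code), the sketch below shows a reservoir-sampled replay buffer that retains a bounded memory of past (graph, label) examples across tasks; the `ReplayBuffer` class, its capacity, and the placeholder string "graphs" are all hypothetical choices made for this example.

```python
import random
from typing import Any, List, Tuple


class ReplayBuffer:
    """Hypothetical fixed-size memory of past (graph, label) examples.

    Uses reservoir sampling so every example seen in the stream has an
    equal chance of being retained, regardless of when it arrived.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.memory: List[Tuple[Any, Any]] = []
        self.seen = 0  # total examples observed so far

    def add(self, graph: Any, label: Any) -> None:
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append((graph, label))
        else:
            # Keep the new example with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.memory[idx] = (graph, label)

    def sample(self, k: int) -> List[Tuple[Any, Any]]:
        return random.sample(self.memory, min(k, len(self.memory)))


if __name__ == "__main__":
    # Toy demonstration: strings stand in for graph objects.
    buffer = ReplayBuffer(capacity=200)
    stream = [(f"graph_{i}", i % 3) for i in range(1000)]
    for graph, label in stream:
        buffer.add(graph, label)
        replayed = buffer.sample(k=8)
        # A real training step would mix `replayed` with the current
        # example and update the model on the combined mini-batch.
    print(f"buffer holds {len(buffer.memory)} of {buffer.seen} seen examples")
```

A mini-batch drawn partly from the buffer and partly from the current task approximates joint training over all tasks seen so far, which is the usual rationale for replay's resistance to forgetting.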
Cited by: 1 article.
1. Face Template Protection Through Incremental Learning and Error-Correcting Codes. 2022 7th International Conference on Signal and Image Processing (ICSIP), 2022-07-20.