Affiliation:
1. Rappaport Faculty of Medicine, Technion - Israel Institute of Technology
2. Network Biology Research Laboratory, Technion - Israel Institute of Technology
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than the ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
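The sparsification described in the abstract can be quantified with a population sparseness measure. As a minimal sketch (the metric and the toy activity vectors below are illustrative, not the authors' exact analysis), the Treves-Rolls sparseness drops from 1 for uniformly active units toward 1/N when a single unit carries all the activity:

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness: <r>^2 / <r^2>.
    Equals 1 for uniform activity, approaches 1/N when one unit dominates."""
    rates = np.asarray(rates, dtype=float)
    return rates.mean() ** 2 / np.mean(rates ** 2)

# Toy example: a dense population vs. a single-active-unit population of 100 units.
dense = np.ones(100)
sparse = np.zeros(100)
sparse[0] = 1.0

print(treves_rolls_sparseness(dense))   # 1.0
print(treves_rolls_sparseness(sparse))  # 0.01
```

Tracking such a measure across training epochs (or recording sessions) is one way to expose the slow sparsification phase after task performance has already plateaued.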
Funder
Israel Science Foundation
German-Israeli Foundation for Scientific Research and Development
US-Israel Binational Science Foundation
Human Frontier Science Program
Rappaport Institute Collaborative research grant
Israel PBC-VATAT and the Technion Center for Machine Learning and Intelligent Systems
Publisher
eLife Sciences Publications, Ltd
Cited by 2 articles.