Abstract
Interacting many-body physical systems ranging from neural networks in the brain to folding proteins to self-modifying electrical circuits can learn to perform specific tasks. This learning, both in nature and in engineered systems, can occur through evolutionary selection or through dynamical rules that drive active learning from experience. Here, we show that learning leaves architectural imprints on the Hessian of a physical system. Compared to a generic organization of the system components, (a) the effective physical dimension of the response to inputs (the participation ratio of low-eigenvalue modes) decreases, (b) the response of physical degrees of freedom to random perturbations (or system “susceptibility”) increases, and (c) the low-eigenvalue eigenvectors of the Hessian align with the task. Overall, these effects suggest a method for discovering the task that a physical network may have been trained for.
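The quantities named in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' code: a random symmetric matrix stands in for a trained system's Hessian, and the choice of `k = 5` soft modes, the function names, and the susceptibility estimator (mean-square displacement per unit random force) are illustrative assumptions only.

```python
import numpy as np

def participation_ratio(v):
    """PR = (sum v_i^2)^2 / sum v_i^4: ranges from 1 (localized on one
    component) to N (uniformly extended over all N components)."""
    v2 = v ** 2
    return v2.sum() ** 2 / (v2 ** 2).sum()

# Hypothetical stand-in Hessian for a 50-component physical system.
rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N))
H = (A + A.T) / 2  # symmetrize

eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order

# (a) Effective physical dimension of the soft response:
# mean participation ratio of the k lowest-eigenvalue modes.
k = 5
soft_pr = np.mean([participation_ratio(eigvecs[:, i]) for i in range(k)])

# (b) Susceptibility: size of the linear response dx = H^{-1} f to a
# random force f (least-squares solve hedges against near-zero modes).
f = rng.standard_normal(N)
dx = np.linalg.lstsq(H, f, rcond=None)[0]
susceptibility = np.dot(dx, dx) / np.dot(f, f)

# (c) Alignment with a task would compare a task direction t against the
# soft eigenvectors, e.g. via |t . eigvecs[:, 0]|; omitted here because
# the random matrix encodes no task.
print(soft_pr, susceptibility)
```

Under the paper's claims, a trained system would show a *lower* `soft_pr` and a *higher* `susceptibility` than this untrained random baseline.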
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.