Authors:
Ran Darshan, Alexander Rivkind
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks—which enable a continuum of persistent neuronal states—provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework, but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximate a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
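The symmetric networks that the abstract says dominate this framework can be illustrated with a minimal ring-attractor simulation. This is a standard textbook sketch of the idealized symmetric baseline, not the paper's trained heterogeneous model; the neuron count `N`, gain `g`, and cosine connectivity are illustrative assumptions. A localized "bump" of activity settles at an arbitrary angle on the ring, realizing a continuum of persistent states.

```python
import numpy as np

# Illustrative symmetric ring attractor (assumed parameters, not the paper's model).
N = 128                                      # neurons on a ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
g = 3.0                                      # recurrent gain (> 1 so a bump forms)
# Symmetric cosine connectivity: J[i, j] depends only on the angular distance.
J = (2 * g / N) * np.cos(theta[:, None] - theta[None, :])

def simulate(r, steps=3000, dt=0.1):
    """Euler-integrate the rate dynamics dr/dt = -r + tanh(J r)."""
    for _ in range(steps):
        r = r + dt * (-r + np.tanh(J @ r))
    return r

def decode(r):
    """Population-vector estimate of the bump's angular position."""
    return np.arctan2(np.sin(theta) @ r, np.cos(theta) @ r) % (2 * np.pi)

# Seed a weak bump at angle 1.0 rad; the dynamics sharpen and sustain it there.
r_final = simulate(0.5 * np.cos(theta - 1.0))
```

Because the connectivity is rotation-symmetric, the same persistent bump exists at every angle; the paper's point is that trained networks can approximate this continuum without such fine-tuned symmetry.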
Publisher
Cold Spring Harbor Laboratory
Cited by 6 articles.