Abstract
Animals learn and form memories by jointly adjusting the efficacy of their synapses. How they efficiently solve the underlying temporal credit assignment problem remains elusive. Here, we re-analyze the mathematical basis of gradient descent learning in recurrent spiking neural networks (RSNNs) in light of recent single-cell transcriptomic evidence for cell-type-specific local neuropeptide signaling in the cortex. Our normative theory posits an important role for neuronal cell types and local diffusive communication in enabling biologically plausible and efficient weight updates. While obeying fundamental biological constraints, including the separation of excitatory and inhibitory cell types and connection sparsity, we trained RSNNs on temporal credit assignment tasks spanning seconds and observed that the inclusion of local modulatory signaling improved learning efficiency. Our learning rule puts forth a novel form of interaction between modulatory signals and synaptic transmission. Moreover, it suggests a computationally efficient learning method for bio-inspired artificial intelligence.
Publisher
Cold Spring Harbor Laboratory
Cited by 3 articles.