Abstract
One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and key questions about the neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to the neurons, and learning relies only on local Hebbian plasticity. Additionally, we show that predictive coding networks can approximate the performance of the Kalman filter in predicting the behaviour of linear systems, and behave as a variant of the Kalman filter that does not track its own subjective posterior variance. Importantly, predictive coding networks can achieve accuracy similar to that of the Kalman filter without performing complex mathematical operations, employing only simple computations that can be implemented by biological networks. Moreover, we demonstrate how the model can be effectively generalized to non-linear systems. Overall, the models presented in this paper show how biologically plausible circuits can predict future stimuli, and they may guide research on specific neural circuits in brain areas involved in temporal prediction.
Author summary
While significant advances have been made in the neuroscience of how the brain processes static stimuli, the time dimension has often been relatively neglected.
However, time is crucial since the stimuli perceived by our senses typically dynamically vary in time, and the cortex needs to make sense of these changing inputs. This paper describes a computational model of cortical networks processing temporal stimuli. This model is able to infer and track the state of the environment based on noisy inputs, and predict future sensory stimuli. By ensuring that these predictions match the following stimuli, the model is able to learn the structure and statistics of its temporal inputs. The model may help in further understanding neural circuits in sensory cortical areas.
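The contrast drawn above can be illustrated with a minimal numerical sketch. The code below is not the paper's model; it is a hypothetical one-dimensional example (all parameter values are assumptions chosen for illustration) comparing a standard Kalman filter, which tracks its posterior variance, against a predictive-coding-style estimator that refines its state estimate purely by local gradient steps on two prediction errors and tracks no variance at all.

```python
import numpy as np

# Hypothetical 1-D linear system: x_t = a*x_{t-1} + process noise,
# observed as y_t = x_t + observation noise. All constants are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
a, q, r = 0.9, 0.1, 0.5          # dynamics, process noise var, obs noise var
T = 200
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0.0, np.sqrt(r))

# Standard Kalman filter: explicitly propagates the posterior variance p.
kf = np.zeros(T)
p = 1.0
for t in range(1, T):
    x_pred = a * kf[t - 1]
    p_pred = a * a * p + q
    k = p_pred / (p_pred + r)            # Kalman gain
    kf[t] = x_pred + k * (y[t] - x_pred)
    p = (1.0 - k) * p_pred

# Predictive-coding-style estimate: at each step, iterate simple gradient
# updates on the precision-weighted temporal and sensory prediction errors.
# No posterior variance is represented anywhere.
pc = np.zeros(T)
lr, n_iters = 0.1, 20
for t in range(1, T):
    z = a * pc[t - 1]                     # initialise at the prediction
    for _ in range(n_iters):
        e_top = (z - a * pc[t - 1]) / q   # temporal prediction error
        e_obs = (y[t] - z) / r            # sensory prediction error
        z -= lr * (e_top - e_obs)         # local gradient descent on error
    pc[t] = z

mse_kf = np.mean((kf - x) ** 2)
mse_pc = np.mean((pc - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_kf, mse_pc, mse_raw)
```

In this toy setting both estimators track the latent state far better than the raw noisy observations, while the predictive-coding estimator uses only local error signals, consistent with the abstract's claim that Kalman-filter-like accuracy can arise from simple, biologically implementable computations.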
Publisher
Cold Spring Harbor Laboratory