Abstract
Classical models of efficient coding in neurons assume simple mean responses, or 'tuning curves', such as bell-shaped or monotonic functions of a stimulus feature. Real neurons, however, can be more complex: grid cells, for example, exhibit periodic responses which endow the neural population code with high accuracy. But do highly accurate codes require fine tuning of the response properties? We address this question with the use of a benchmark model: a neural network with random synaptic weights whose output cells have irregular tuning curves. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. At the optimal smoothness of the tuning curves, where local and global errors balance out, the neural network compresses information from a high-dimensional representation into a low-dimensional one, and the resulting distributed code achieves exponential accuracy. An analysis of recordings from monkey motor cortex points to such 'compressed efficient coding'. Efficient codes do not require a finely tuned design; they emerge robustly from irregularity or randomness.
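The abstract describes the benchmark model only at a high level. As a rough illustration (not the paper's actual model: the Gaussian input tuning, the layer sizes, and the weight distribution below are all assumptions), the following sketch shows how random synaptic weights acting on a smooth, high-dimensional input representation produce a smaller set of output cells with irregular tuning curves, with a single width parameter controlling the smoothness that trades local resolution against global errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus feature sampled on a 1-D grid (e.g., an angle or a position).
x = np.linspace(0.0, 1.0, 500)

# High-dimensional input layer: N smooth, bell-shaped tuning curves.
# The Gaussian form and the width `sigma` are illustrative assumptions;
# `sigma` plays the role of the tuning-curve smoothness in the abstract.
N, M, sigma = 200, 20, 0.05
centers = np.linspace(0.0, 1.0, N)
inputs = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

# Random synaptic weights compress the N-dimensional representation
# into M output cells (M << N), yielding irregular tuning curves.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(M, N))
outputs = inputs @ W.T  # shape (len(x), M): one irregular curve per cell

# Smaller sigma -> more irregular output curves: finer local resolution,
# but a greater risk of global (catastrophic) decoding errors.
```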
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.