Abstract
Sparse coding, predictive coding and divisive normalization are thought to be underlying principles of neural circuits across the brain, and each is supported by substantial experimental evidence. However, the connections among these principles remain poorly understood. In this paper, we aim to unify sparse coding, predictive coding, and divisive normalization in a two-layer neural model. The model implements sparse coding with a network structure derived from predictive coding. Our results show that a homeostatic function in the model can shape the nonlinearity of neural responses and thereby replicate different forms of divisive normalization. We demonstrate that the model is equivalent to divisive normalization in the single-neuron case. Our simulations also show that the model can learn simple cells exhibiting contrast saturation, a property previously explained by divisive normalization. In summary, our study demonstrates that the three principles of sparse coding, predictive coding, and divisive normalization can be unified in a model that can both learn and display more diverse response nonlinearities. This framework may also help explain how the brain learns to integrate input from different sensory modalities.
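For reference, the canonical divisive normalization form (Carandini & Heeger, 2012) against which the single-neuron equivalence would be stated is sketched below in LaTeX; the particular exponent and constants used in the paper's derivation are assumptions here, not taken from this abstract.

% Canonical divisive normalization: the response r_i of neuron i is its
% driving input y_i raised to a power n, divided by a semi-saturation
% constant sigma^n plus the pooled activity of the normalization pool.
% gamma, n, and sigma are model parameters (assumed, for illustration).
\[
  r_i \;=\; \gamma \,\frac{y_i^{\,n}}{\sigma^{\,n} + \sum_{j} y_j^{\,n}}
\]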
Publisher
Cold Spring Harbor Laboratory