Affiliation:
1. Albstadt-Sigmaringen University, D-72458 Albstadt-Ebingen, Germany
Abstract
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n²/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n²/k² memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning: for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2-bit (4-state) synapses, ζ = 0.96 for 3-bit (8-state) synapses, and ζ = 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C_I = 1 bit per computer bit and up to C_S = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
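The following is a minimal illustrative sketch in Python/NumPy, not the model from the paper: it stores sparse patterns in a Willshaw-type network with binary (clipped Hebbian) synapses and, for comparison, discretizes Hopfield-type (covariance-rule) weights into a few states using simple quantile bins. The network size, pattern activity, winners-take-all readout, and quantile binning are assumptions made for this example; the paper derives optimal discretization thresholds and state values instead.

```python
# Minimal illustrative sketch (not the paper's model): Willshaw-type learning
# with binary synapses, plus a naive discretization of Hopfield-type
# (covariance-rule) weights into 2**bits states. Parameter choices and the
# quantile-based binning are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, k, M = 1000, 20, 200              # neurons, active units per pattern, stored memories

# Sparse binary memory patterns u^1..u^M (autoassociation for simplicity).
U = np.zeros((M, n), dtype=np.int32)
for mu in range(M):
    U[mu, rng.choice(n, size=k, replace=False)] = 1

# --- Steinbuch/Willshaw learning: clipped Hebbian rule, binary synapses ---
W_willshaw = (U.T @ U > 0).astype(np.int32)

def retrieve_willshaw(cue):
    """One-step retrieval: a unit fires if it gets input from every active cue unit."""
    return (W_willshaw @ cue >= cue.sum()).astype(np.int32)

# --- Hopfield-type (covariance rule) learning with real-valued weights ---
p = k / n
A = (U - p).T @ (U - p)
np.fill_diagonal(A, 0.0)             # no self-connections

def discretize(W, bits):
    """Map real weights onto 2**bits states via equal-mass (quantile) bins.
    The paper optimizes thresholds and state values; quantiles are a stand-in."""
    edges = np.quantile(W, np.linspace(0, 1, 2 ** bits + 1)[1:-1])
    return np.digitize(W, edges)     # integer state index per synapse

A_2bit = discretize(A, bits=2)       # four-state synapses

def retrieve_topk(W, cue):
    """One-step retrieval with k-winners-take-all on the dendritic potentials."""
    out = np.zeros(n, dtype=np.int32)
    out[np.argsort(W @ cue)[-k:]] = 1
    return out

# Cue with half of a stored pattern erased, then compare recall quality.
target = U[0]
cue = target.copy()
cue[np.flatnonzero(target)[k // 2:]] = 0
for name, rec in [("Willshaw (1 bit)", retrieve_willshaw(cue)),
                  ("discrete (2 bit)", retrieve_topk(A_2bit, cue))]:
    print(f"{name:17s} recall errors: {int(np.sum(rec != target))}")
```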
Subject
Cognitive Neuroscience, Arts and Humanities (miscellaneous)
Cited by
9 articles.