Authors:
Gabriele Scheler, Johann Schumann
Abstract
The issue of memory is difficult for standard neural network models. Ubiquitous synaptic plasticity introduces the problem of interference, which limits pattern recall and produces conflation errors. We present a lognormal recurrent neural network, load patterns (MNIST) into it, and test the information content of the resulting neural representation with an output classifier. We identify neurons that 'compress' the pattern information into their own adjacency network, and achieve recall by stimulating these neurons. Learning is limited to intrinsic plasticity and the output synapses of these pattern neurons (localist plasticity), which prevents interference. Our first experiments show that this form of storage and recall is possible, with the caveat of a 'lossy' recall similar to human memory. Comparing our results with a standard Gaussian network model, we find that this effect breaks down in the Gaussian case.
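As a rough illustration of the contrast the abstract draws, the following Python sketch builds a recurrent rate network with lognormal-distributed weights and compares it against a Gaussian-weight control. This is not the authors' implementation: the network size, connectivity, distribution parameters, dynamics, and the stand-in pattern (a random binary vector in place of a binarized MNIST digit) are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 400  # number of neurons (assumed)

def recurrent_weights(dist):
    """Sparse random recurrent weights with lognormal or Gaussian magnitudes (parameters assumed)."""
    mask = rng.random((N, N)) < 0.1  # 10% connectivity (assumed)
    np.fill_diagonal(mask, False)    # no self-connections
    if dist == "lognormal":
        w = rng.lognormal(mean=-1.0, sigma=1.0, size=(N, N))
    else:
        w = np.abs(rng.normal(loc=0.4, scale=0.1, size=(N, N)))
    return w * mask

def run(W, pattern, steps=50, gain=0.05, decay=0.9):
    """Drive the network with a fixed input pattern and return the settled rate vector."""
    r = np.zeros(N)
    for _ in range(steps):
        r = np.tanh(decay * r + gain * (W @ r) + pattern)
    return r

# Stand-in for a binarized MNIST digit loaded into the network.
pattern = (rng.random(N) < 0.1).astype(float)

for dist in ("lognormal", "gaussian"):
    r = run(recurrent_weights(dist), pattern)
    # In a heavy-tailed (lognormal) network, a small set of strongly
    # connected neurons tends to dominate activity; the share of total
    # activity in the top 5% is one crude way to flag candidate
    # 'pattern neurons' of the kind the abstract describes.
    top_share = np.sort(r)[-int(0.05 * N):].sum() / max(r.sum(), 1e-9)
    print(f"{dist}: top-5% neurons carry {top_share:.2f} of total activity")

Under these assumptions, the lognormal network concentrates activity in a few high-degree neurons, while the Gaussian control spreads activity more evenly; this mirrors, in a very reduced form, why localist storage on a few pattern neurons is available in one regime but breaks down in the other.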
Publisher
Cold Spring Harbor Laboratory