Authors:
Dmitry Krotov, John J. Hopfield
Abstract
It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.
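As a rough illustration of the kind of rule the abstract describes, the sketch below implements a local, competitive Hebbian update in Python. It is not the authors' exact algorithm: the learning rate, the rank of the inhibited unit, and the anti-Hebbian strength are illustrative assumptions, and a simple rank-based penalty stands in for the paper's global inhibition. Each weight update depends only on the input and that unit's own activation, i.e., it is local in Hebb's sense, and no labels are used.

```python
import numpy as np

def competitive_hebbian_step(W, x, lr=0.01, anti_rank=2, anti_strength=0.4):
    """One unsupervised update of hidden weights W (n_hidden x n_input).

    The most activated hidden unit strengthens its weights toward the
    input (Hebbian); the `anti_rank`-th most activated unit is weakly
    pushed away, a crude stand-in for global inhibition that forces
    units to compete and specialize. All updates are local: each depends
    only on the input x and the updated unit's own activation.
    """
    activations = W @ x                       # pre/post synaptic activities
    order = np.argsort(activations)[::-1]     # units ranked by activation
    winner, rival = order[0], order[anti_rank - 1]

    # Hebbian update for the winner; the (w . x) w term keeps weights bounded.
    W[winner] += lr * (x - activations[winner] * W[winner])
    # Anti-Hebbian update for a lower-ranked unit (the competition term).
    W[rival] -= lr * anti_strength * (x - activations[rival] * W[rival])
    return W

# Toy usage: learn feature detectors from unlabeled inputs.
rng = np.random.default_rng(0)
n_hidden, n_input = 20, 64
W = rng.normal(scale=0.1, size=(n_hidden, n_input))
for _ in range(1000):
    x = rng.random(n_input)                   # stand-in for a training image
    W = competitive_hebbian_step(W, x)
```

After such unsupervised training, the hidden weights W would be frozen and a top layer trained on the hidden activations with ordinary supervised learning, matching the two-stage procedure the abstract outlines.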
Publisher
Proceedings of the National Academy of Sciences