Abstract
The single-layer perceptron, introduced by Rosenblatt in 1958, is one of the earliest and simplest neural network models. However, it cannot classify patterns that are not linearly separable. A new era of neural network research began in 1986, when the backpropagation (BP) algorithm was rediscovered for training the multilayer perceptron (MLP) model. An MLP with a sufficiently large number of hidden nodes can function as a universal approximator. To date, the MLP remains the most fundamental and most thoroughly investigated neural network model, and even in the deep learning era it is still among the most studied and most widely used. Numerous new results have been obtained in the past three decades. This survey gives a comprehensive, state-of-the-art introduction to the perceptron model, with emphasis on learning, generalization, model selection, and fault tolerance. The role of the perceptron model in the deep learning era is also described. Covering the major achievements of the past seven decades, the paper serves as both a concluding survey of and a tutorial on perceptron learning.
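As a concrete illustration of the limitation mentioned above (not drawn from the paper itself), the following minimal Python sketch implements the classic perceptron update rule; the function name, learning rate, and toy datasets are assumptions chosen here for demonstration. It converges on a linearly separable task (logical AND) but cannot succeed on XOR, the canonical non-separable case.

```python
import numpy as np

# Minimal sketch of Rosenblatt's perceptron learning rule, for illustration
# only; hyperparameters and data below are illustrative assumptions.
def train_perceptron(X, y, epochs=100, lr=1.0):
    """Learn weights w and bias b so that sign(w.x + b) matches y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # Update only on a misclassified example: the classic rule.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:  # converged: every training point strictly correct
            break
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Linearly separable case (logical AND): the rule converges.
y_and = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y_and)
print(np.sign(X @ w + b))  # matches y_and

# Non-separable case (XOR): no (w, b) can classify all four points,
# the limitation highlighted by Minsky and Papert (1969).
y_xor = np.array([-1, 1, 1, -1])
w, b = train_perceptron(X, y_xor)
print(np.sign(X @ w + b))  # never matches y_xor on all four points
```

The XOR failure is exactly what motivated the move to MLPs: inserting a hidden layer of nonlinear units makes the composed decision boundary nonlinear, and BP provides the gradient signal to train it.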
Funder
Hong Kong Research Grants Council
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
References
391 articles.
1. McCulloch, W.S., and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys.
2. Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev.
3. Rosenblatt, F. (1962). Principles of Neurodynamics, Spartan Books.
4. Widrow, B., and Hoff, M.E. (1960). Adaptive switching circuits. IRE Western Electronic Show and Convention (WESCON) Record, Part 4, IRE.
5. Minsky, M.L., and Papert, S. (1969). Perceptrons, MIT Press.
Cited by
14 articles.