Abstract
Rosenblatt's first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections or a relatively small diameter of the receptive field for each neuron in the hidden layer. They proved that, under these constraints, an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt's first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem; it is much more efficient. The shallow network uses an exponentially large number of neurons in the hidden layer (Rosenblatt's A-elements), whereas for the deep network second-order polynomial complexity is sufficient. We demonstrated that for the same complex problem the deep network can be much smaller, and we revealed a heuristic behind this effect.
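The first claim can be illustrated directly. Below is a minimal sketch (not the paper's construction) of an elementary perceptron in Rosenblatt's sense: a fixed layer of randomly wired threshold A-elements followed by a single trainable R-element updated by the perceptron rule, applied to 4-bit parity, one of the problems Minsky and Papert analysed. The width n_hidden and the random wiring scheme are illustrative assumptions, not values from the paper.

# Sketch: elementary perceptron = fixed random A-layer + trainable R-element.
# Assumed setup: 4-bit parity, 64 A-elements, perceptron learning rule.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 4, 64          # hidden width chosen generously; parity needs many A-units
X = np.array(list(itertools.product([0, 1], repeat=n_inputs)), dtype=float)
y = X.sum(axis=1).astype(int) % 2   # parity labels in {0, 1}

# A-elements: fixed random weights and thresholds, never trained.
W_a = rng.choice([-1.0, 1.0], size=(n_inputs, n_hidden))
b_a = rng.uniform(-n_inputs, n_inputs, size=n_hidden)
A = (X @ W_a + b_a > 0).astype(float)   # A-layer activations for all 16 inputs

# R-element: perceptron learning rule on the output weights only.
w, b = np.zeros(n_hidden), 0.0
for _ in range(1000):
    errors = 0
    for a, t in zip(A, y):
        pred = int(a @ w + b > 0)
        if pred != t:               # classical perceptron update on mistakes
            w += (t - pred) * a
            b += (t - pred)
            errors += 1
    if errors == 0:                 # converged: training set separated in A-space
        break

print("training errors in final epoch:", errors)

With a sufficiently large unrestricted A-layer the random features make the parity data linearly separable and the training error reaches zero, consistent with Rosenblatt's theorem; restricting each A-element's connections or receptive field, as Minsky and Papert did, is what makes parity unsolvable at any width.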
Funder
Ministry of Science and Higher Education of the Russian Federation
Subject
General Physics and Astronomy
References (22 articles)
1. Rosenblatt, F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, 1962.
2. Venkatesh. A review of feature selection and its methods. Cybern. Inf. Technol., 2019.
3. Al-Tashi. Approaches to multi-objective feature selection: A systematic literature review. IEEE Access, 2020.
4. Rong. Feature selection and its use in big data: Challenges, methods, and trends. IEEE Access, 2019.
5. Minsky, M., and Papert, S. Perceptrons, 1988.
Cited by (2 articles)
1. The Comparison of Neural Networks for Contactless BPM Estimation. 2024 34th International Conference Radioelektronika (RADIOELEKTRONIKA), 2024-04-17.
2. The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning. Artificial Neural Networks and Machine Learning – ICANN 2023, 2023.