Abstract
“Deep Learning” uses Post-Selection: selecting one model after training multiple models on the same data. The performance figures of “Deep Learning” have been deceptively inflated by two misconducts: (1) cheating in the absence of a test and (2) hiding bad-looking data. Through the same misconducts, a simple method, Pure-Guess Nearest Neighbor (PGNN), gives zero errors on any validation dataset V, as long as V is in the authors' possession and both the storage space and the training time are finite but unbounded. These misconducts are fatal because “Deep Learning” overfits the sample set V and therefore does not generalize. The charges here apply to all learning modes. This chapter proposes new AI metrics, called developmental errors, for all trained networks under four Learning Conditions: (1) a body including sensors and effectors, (2) an incremental learning architecture (necessitated by the “big data” flaw), (3) a training experience, and (4) a limited amount of computational resources. Developmental Networks avoid the Deep Learning misconducts because they train a single system that automatically discovers context rules on the fly by generating emergent Turing machines, which are optimal in the sense of maximum likelihood across a lifetime, conditioned on the four Learning Conditions.
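To make the PGNN argument concrete, the following Python sketch (not taken from the chapter; the function names, parameters, and toy data are hypothetical) shows one way the two misconducts produce a zero-error report: pure-guess label tables are post-selected against the possessed validation set V, the bad-looking trials are hidden, and queries are answered by nearest-neighbor lookup into the memorized V.

```python
import numpy as np


def pgnn_post_select(V_x, V_y, n_labels, max_trials=100_000, seed=0):
    """Illustrative sketch of Post-Selection misconduct (hypothetical code,
    not the chapter's implementation).

    "Train" many pure-guess label tables for the possessed validation set V,
    keep only the luckiest table (lowest error on V itself), and silently
    discard every bad-looking trial.  With unbounded trials and storage, a
    zero-error table on V appears with probability one, yet the resulting
    predictor generalizes no better than chance on a truly withheld test.
    """
    rng = np.random.default_rng(seed)
    best_table, best_err = None, np.inf
    for _ in range(max_trials):                        # stands in for "unbounded" training time
        table = rng.integers(n_labels, size=len(V_x))  # one pure guess per stored V sample
        err = np.mean(table != V_y)                    # "validation" error measured on the same V
        if err < best_err:                             # Post-Selection: keep the best-looking trial
            best_table, best_err = table, err
        if best_err == 0.0:
            break                                      # perfect fit on V reached by luck alone

    def predict(x):
        # Nearest-neighbor lookup into the memorized validation set:
        # answer with the label guessed for the closest stored V sample.
        idx = int(np.argmin(np.linalg.norm(V_x - x, axis=1)))
        return best_table[idx]

    return predict, best_err


if __name__ == "__main__":
    # Tiny toy V: five two-dimensional samples with binary labels.
    V_x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    V_y = np.array([0, 1, 1, 0, 1])
    predict, err = pgnn_post_select(V_x, V_y, n_labels=2)
    print("reported 'validation' error on V:", err)    # reaches 0.0 after enough lucky guesses
```

Because the selection criterion and the reported score use the same V, the zero error says nothing about performance on a genuinely withheld test set, which is the inflation the abstract charges.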