Abstract
Deep learning and similar machine learning techniques have a major advantage over other AI methods: they actually work when applied to real-world data, ideally from scratch and without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression these techniques achieve, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here in the form of two mechanisms: complexity drop and contrast. These mechanisms are intended to operate dynamically, rather than through pre-processing as in neural networks. Their introduction may move the functioning of AI away from mere reflex and closer to reflection.