Abstract
Strong AI, artificial intelligence that is in all respects at least as intelligent as humans, is still out of reach. Current AI lacks common sense; that is, it is unable to infer, understand, or explain the hidden processes, forces, and causes behind data. Mainstream machine learning research on deep artificial neural networks (ANNs) may even be characterized as behavioristic. In contrast, various sources of evidence from cognitive science suggest that human brains engage in the active development of compositional generative predictive models (CGPMs) from their self-generated sensorimotor experiences. Guided by evolutionarily shaped inductive learning and information-processing biases, they exhibit the tendency to organize the gathered experiences into event-predictive encodings. Meanwhile, they infer and optimize behavior and attention by means of both epistemic- and homeostasis-oriented drives. I argue that AI research should set a stronger focus on learning CGPMs of the hidden causes that lead to the registered observations. Endowed with suitable information-processing biases, AI may develop the ability to explain the reality it is confronted with, to reason about it, and to find adaptive solutions, making it Strong AI. Seeing that such Strong AI can be equipped with mental capacities and computational resources that exceed those of humans, the resulting systems may have the potential to guide our knowledge, technology, and policies in sustainable directions. Clearly, though, Strong AI may also be used to manipulate us even more. It will thus be up to us to put good, far-reaching, long-term, homeostasis-oriented purpose into these machines.
Funder
Deutsche Forschungsgemeinschaft
Alexander von Humboldt-Stiftung
Eberhard Karls Universität Tübingen
Publisher
Springer Science and Business Media LLC
Cited by 19 articles.