Abstract
The goal of this paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another dimension is recognized as generalization, the possibility to move from specific to more general types of problems. A third dimension is semantic grounding. Our overall analysis connects to a number of known foundational issues in the philosophy of mind and cognition: the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and use theories of meaning. It is finally argued that the dimension of grounding decomposes into three sub-dimensions, and that the dimension of self-learning turns out to be only one of a whole range of “self-x-capacities” (based on ideas of organic computing) that span the self-x-subspace of the full AI state space.
Funder
Otto-von-Guericke-Universität Magdeburg
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Philosophy
References (52 articles)
1. Bengio, Yoshua, Dong-Hyun Lee, Jörg Bornschein, Thomas Mesnard, & Zhouhan Lin (2016). Towards Biologically Plausible Deep Learning. arXiv:1502.04156v3.
2. Block, Ned (1981). Psychologism and Behaviorism. Philosophical Review, 90(1), 5–43.
3. Block, Ned (1998). Semantics, conceptual role. In The Routledge Encyclopedia of Philosophy, ed. E. Craig. London: Routledge.
4. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
5. Botvinick, Matthew M., Ritter, Sam, Wang, Jane X., Kurth-Nelson, Zeb, & Hassabis, Demis (2019). Reinforcement Learning, Fast and Slow. Trends in Cognitive Sciences, 23(5), 408–422.
Cited by 12 articles.