Abstract
As AI systems become increasingly competent language users, it is an apt moment to consider what it would take for machines to understand human languages. This paper considers whether either language models such as GPT-3 or chatbots might be able to understand language, focusing on the question of whether they could possess the relevant concepts. A significant obstacle is that systems of both kinds interact with the world only through text, and thus seem ill-suited to understanding utterances concerning the concrete objects and properties which human language often describes. Language models cannot understand human languages because they perform only linguistic tasks, and therefore cannot represent such objects and properties. However, chatbots may perform tasks concerning the non-linguistic world, so they are better candidates for understanding. Chatbots can also possess the concepts necessary to understand human languages, despite their lack of perceptual contact with the world, due to the language-mediated concept-sharing described by social externalism about mental content.
Publisher
Springer Science and Business Media LLC
Cited by 4 articles.