Abstract
This article begins with the question of whether current or foreseeable transformer-based large language models (LLMs), such as those powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous organisms, while LLMs are heteronomous mechanisms. To conclude, the article argues, based on structural aspects of transformer-based LLMs, that these models have taken a first step away from mechanistic artificiality toward autonomous self-constitution, which means that they are (slowly) moving in a direction that might someday result in non-human, but equally non-artificial, agents, thus subverting the time-honored Kantian distinction between organism and mechanism.
Publisher
Springer Science and Business Media LLC
Cited by: 1 article.