Affiliations:
1. Department of Computer Science, University College London, London, UK
2. Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
Abstract
Do large language models (LLMs) display rational reasoning? LLMs have been shown to contain human biases due to the data on which they have been trained; whether this is reflected in rational reasoning remains less clear. In this paper, we answer this question by evaluating seven language models using tasks from the cognitive psychology literature. We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not mirror that shown by humans. When LLMs give incorrect answers to these tasks, they are often incorrect in ways that differ from human-like biases. Moreover, the LLMs reveal an additional layer of irrationality in the significant inconsistency of their responses. Beyond the experimental results, this paper seeks to make a methodological contribution by showing how we can assess and compare the capabilities of these types of models, in this case with respect to rational reasoning.