Affiliation:
1. Department of Cognitive Science, UC San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
Abstract
We address a growing debate about the extent to which large language models (LLMs) produce behavior consistent with Theory of Mind (ToM) in humans. We present EPITOME, a battery of six experiments that tap diverse ToM capacities, including belief attribution, emotional inference, and pragmatic reasoning, and we elicit a human performance baseline for each task. We use this dataset to ask whether the distributional linguistic information learned by LLMs is sufficient to explain ToM in humans, comparing the performance of five LLMs against responses from human comprehenders. Results are mixed: LLMs display considerable sensitivity to mental states and match human performance on several tasks, yet they commit systematic errors on others, especially those requiring pragmatic reasoning on the basis of mental state information. This uneven performance indicates that human-level ToM may require resources beyond distributional information.