Affiliation:
1. Department of Language of Science and Technology, Saarland University
2. Department of Computer Science, Saarland University
Abstract
Formal probabilistic models, such as the Rational Speech Act model, are widely used to formalize the reasoning involved in various pragmatic phenomena, and when a model achieves a good fit to experimental data, this is interpreted as evidence that the model captures some of the underlying processes. Yet how can we be sure that participants’ performance on the task results from successful reasoning and not from some feature of the experimental setup? In this study, we carefully manipulate the properties of stimuli that have been used in several pragmatics studies and elicit participants’ reasoning strategies. We show that certain biases in experimental design inflate participants’ performance on the task. We then repeat the experiment with a new version of the stimuli that is less susceptible to the identified biases, obtaining a somewhat smaller effect size and more reliable estimates of individual-level performance.
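For readers unfamiliar with the Rational Speech Act framework mentioned above, the following is a minimal sketch of its standard recursion (literal listener, pragmatic speaker, pragmatic listener) for a toy reference game. The specific objects, utterances, uniform prior, and rationality parameter `alpha` are illustrative assumptions, not taken from this study.

```python
import numpy as np

# Truth-conditional meanings for a toy reference game.
# Rows: utterances ("glasses", "hat"); columns: referents
# (face with only glasses, face with glasses and hat, face with only hat).
truth = np.array([
    [1, 1, 0],   # "glasses" is true of referents 0 and 1
    [0, 1, 1],   # "hat" is true of referents 1 and 2
], dtype=float)

def rsa(truth, prior=None, alpha=1.0):
    """Return the pragmatic listener distribution L1(referent | utterance)."""
    n_utt, n_ref = truth.shape
    if prior is None:
        prior = np.full(n_ref, 1.0 / n_ref)
    # Literal listener L0: condition the prior on the utterance being true.
    L0 = truth * prior
    L0 /= L0.sum(axis=1, keepdims=True)
    # Pragmatic speaker S1: chooses utterances in proportion to L0^alpha
    # (a softmax over log L0 with rationality alpha; utterance costs omitted).
    S1 = L0 ** alpha
    S1 /= S1.sum(axis=0, keepdims=True)
    # Pragmatic listener L1: Bayesian inversion of the speaker model.
    L1 = S1 * prior
    L1 /= L1.sum(axis=1, keepdims=True)
    return L1

L1 = rsa(truth)
# Hearing "glasses", L1 favors the face with only glasses (prob. 2/3 here),
# since a speaker meaning the middle face could have said "hat" instead.
print(L1[0])  # → [0.667 0.333 0.   ]
```

This captures the pragmatic strengthening the model is designed to explain: the ambiguous utterance is resolved toward the referent for which no better alternative utterance exists.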
Funder
European Union’s Horizon 2020 Research and Innovation Programme
Subject
Cognitive Neuroscience, Linguistics and Language, Developmental and Educational Psychology, Experimental and Cognitive Psychology