Abstract
Humans experience feelings of confidence in their decisions. In perception, these feelings are typically accurate – we tend to feel more confident about correct decisions. The degree of insight people have into the accuracy of their decisions is known as metacognitive sensitivity. Currently popular methods of estimating metacognitive sensitivity are subject to interpretive ambiguities because they assume that humans experience normally-shaped distributions of different experiences when they are repeatedly exposed to a single input. If, however, people have skewed distributions of experiences, or distributions with excess kurtosis (i.e. a distribution containing greater numbers of extreme experiences than predicted by a normal distribution), calculations can erroneously underestimate metacognitive sensitivity. Here, we describe a means of estimating metacognitive sensitivity that is more robust against violations of the normality assumption. This improved method relies on estimating the precision with which people transition from making categorical decisions with relatively low confidence to making them with high confidence, and on comparing this with the precision with which they transition between making different types of perceptual category decision. The new method can easily be added to standard behavioral experiments. We provide free MATLAB code to help researchers implement these analyses and procedures in their own experiments.

Public Significance Statement
Signal-detection theory is one of the most popular frameworks for analysing data from experiments of human behaviour – including investigations of confidence. The authors demonstrate that if a key assumption of this framework is inadvertently violated, analyses of confidence can lead to unwarranted conclusions. They develop a new and more robust measure of confidence.
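The core idea of comparing two transition precisions can be illustrated with a small simulation. The sketch below is a hedged illustration of the general logic only (not the authors' actual procedure or their MATLAB code): it simulates binary decisions and high-confidence reports across stimulus levels, fits a cumulative Gaussian to each transition, and compares the two fitted precisions. The stimulus range, confidence criterion, and trial counts are all illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the authors' method): compare the
# precision of the decision transition with the precision of the
# low-to-high-confidence transition by fitting cumulative Gaussians to both.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)
stim = np.linspace(-3, 3, 13)   # signed stimulus evidence levels (assumed)
n_trials = 500                  # trials per stimulus level (assumed)

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian transition function."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Simulate internal evidence = stimulus + Gaussian noise (sd = 1).
# Type-1 decision: report "category A" when evidence > 0.
# High confidence (on the "A" side): evidence exceeds a criterion at +1.
sigma_true = 1.0
p_decision = np.array([np.mean(s + rng.normal(0, sigma_true, n_trials) > 0)
                       for s in stim])
p_highconf = np.array([np.mean(s + rng.normal(0, sigma_true, n_trials) > 1)
                       for s in stim])

# Fit each transition and extract its spread (inverse precision).
(mu_d, sd_d), _ = curve_fit(cum_gauss, stim, p_decision, p0=[0.0, 1.0],
                            bounds=([-5.0, 0.05], [5.0, 5.0]))
(mu_c, sd_c), _ = curve_fit(cum_gauss, stim, p_highconf, p0=[1.0, 1.0],
                            bounds=([-5.0, 0.05], [5.0, 5.0]))

# If confidence reads out the same evidence as the decision, the two
# transitions should be similarly precise, giving a ratio near 1.
ratio = sd_d / sd_c
print(f"decision sd={sd_d:.2f}, confidence sd={sd_c:.2f}, ratio={ratio:.2f}")
```

Because the precisions are estimated from where responses change rather than from the shape of the response distribution itself, a comparison of this kind is less sensitive to skew or excess kurtosis in the underlying evidence distribution, which is the motivation the abstract describes.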
Publisher
Cold Spring Harbor Laboratory