Abstract
A key aspect of metacognition is metacognitive accuracy, i.e., the degree to which confidence judgments differentiate between correct and incorrect trials. To quantify metacognitive accuracy, researchers are faced with an increasing number of different methods. The present study investigated the false positive rates associated with various measures of metacognitive accuracy by hierarchically resampling from the Confidence Database to accurately represent the statistical properties of confidence judgments. We found that most measures based on computing summary statistics separately for each participant and then analysing them at the group level performed adequately in terms of false positive rate, including gamma correlations, meta-d′, and the area under type 2 ROC curves. The ratio meta-d′/d′ was associated with a false positive rate even below 5%, but log-transformed meta-d′/d′ performed adequately. The false positive rate of HMeta-d depended on the study design and on the prior specification: for group designs, the false positive rate was above 5% when independent priors were placed on both groups, but it was adequate when a prior was placed on the difference between groups. For continuous predictor variables, default priors resulted in a false positive rate below 5%, but the false positive rate was not distinguishable from 5% when close-to-flat priors were used. Logistic mixed-model regression analysis was associated with dramatically inflated false positive rates when random slopes were omitted from the model specification. In general, we argue that no measure of metacognitive accuracy should be used unless its false positive rate has been demonstrated to be adequate.
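The general logic of checking a measure's false positive rate can be illustrated with a short sketch. The Python code below is a simplified, hypothetical illustration rather than the authors' pipeline: instead of hierarchically resampling real trials from the Confidence Database, it simulates null data from an equal-variance signal detection model (all parameter values are assumptions), computes the area under the type 2 ROC curve per participant, compares two groups with a t-test, and counts how often the test is significant when no true difference exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def type2_auroc(correct, confidence):
    """Area under the type 2 ROC: P(confidence is higher on a correct trial
    than on an incorrect trial), computed from the Mann-Whitney U statistic."""
    hi = confidence[correct]
    lo = confidence[~correct]
    if hi.size == 0 or lo.size == 0:
        return np.nan
    u = stats.mannwhitneyu(hi, lo).statistic
    return u / (hi.size * lo.size)

def simulate_participant(n_trials=200, d_prime=1.0, n_conf_levels=4):
    """Equal-variance SDT observer (assumed parameters); confidence is binned |evidence|."""
    stimulus = rng.choice([-1, 1], n_trials)
    evidence = rng.normal(stimulus * d_prime / 2, 1.0)
    correct = np.sign(evidence) == stimulus
    confidence = np.minimum(np.abs(evidence) // 0.5, n_conf_levels - 1)
    return correct, confidence

def one_experiment(n_per_group=20):
    """Both groups share identical generative parameters, so any significant
    group difference in type 2 AUROC is a false positive."""
    groups = []
    for _ in range(2):
        aurocs = [type2_auroc(*simulate_participant()) for _ in range(n_per_group)]
        groups.append(np.array(aurocs))
    return stats.ttest_ind(groups[0], groups[1]).pvalue

n_sims = 1000
fpr = np.mean([one_experiment() < 0.05 for _ in range(n_sims)])
print(f"Estimated false positive rate: {fpr:.3f}")  # should be close to the nominal .05
```

Because the null hypothesis is true by construction, the proportion of significant tests directly estimates the false positive rate; a measure or analysis strategy is adequate in the sense used above when this proportion stays near the nominal 5%.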
Funder
Deutsche Forschungsgemeinschaft
Katholische Universität Eichstätt-Ingolstadt
Publisher
Springer Science and Business Media LLC
Cited by
1 article.