Predicting and reasoning about replicability using structured groups

Author:

Bonnie C. Wintle (1), Eden T. Smith (2), Martin Bush (2), Fallon Mody (2), David P. Wilkinson (1), Anca M. Hanea (1,3), Alexandru Marcoci (4), Hannah Fraser (1), Victoria Hemming (5), Felix Singleton Thorn (6), Marissa F. McBride (1,7), Elliot Gould (1), Andrew Head (2), Daniel G. Hamilton (2), Steven Kambouris (1), Libby Rumpff (1), Rink Hoekstra (8), Mark A. Burgman (7), Fiona Fidler (1,2)

Affiliation:

1. MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia

2. MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia

3. Centre of Excellence for Biosecurity Risk Analysis, School of BioSciences, University of Melbourne, Parkville 3010, Australia

4. Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK

5. Martin Conservation Decisions Lab, Department of Forest and Conservation Sciences, University of British Columbia, Vancouver, Canada

6. School of Psychological Sciences, University of Melbourne, Parkville 3010, Australia

7. Centre for Environmental Policy, Imperial College London, London, UK

8. Department of Pedagogical and Educational Sciences, University of Groningen, Groningen, The Netherlands

Abstract

This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed methods approach, it draws on qualitative and quantitative data elicited from groups using a structured approach called the IDEA protocol (‘investigate’, ‘discuss’, ‘estimate’ and ‘aggregate’). Five groups of five people with relevant domain expertise evaluated 25 research claims that were subject to at least one replication study. Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as ‘effect size’ and ‘reputation’ (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.
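To make the scoring concrete, the sketch below shows one simple way a group's probability judgements could be aggregated and compared against replication outcomes to yield a classification accuracy, as the abstract describes. This is an illustrative reconstruction, not the authors' analysis code: the mean aggregation, the 0.5 threshold, and all the numbers are hypothetical assumptions.

```python
# Illustrative sketch (not the authors' code): average each group's
# probability judgements per claim, threshold the mean to get a binary
# prediction, and score against observed replication outcomes.
# All judgement values and outcomes below are hypothetical.

def aggregate_and_classify(judgements, threshold=0.5):
    """Return the group mean and a binary 'will replicate' prediction."""
    mean = sum(judgements) / len(judgements)
    return mean, mean > threshold

def classification_accuracy(group_judgements, outcomes, threshold=0.5):
    """Fraction of claims whose thresholded group prediction matches
    the observed replication outcome (True = replicated)."""
    correct = 0
    for judgements, outcome in zip(group_judgements, outcomes):
        _, predicted = aggregate_and_classify(judgements, threshold)
        correct += (predicted == outcome)
    return correct / len(outcomes)

# Hypothetical example: three claims, five judges each.
groups = [
    [0.8, 0.7, 0.9, 0.6, 0.75],   # claim 1: group leans 'replicates'
    [0.2, 0.3, 0.25, 0.4, 0.1],   # claim 2: group leans 'fails'
    [0.6, 0.55, 0.7, 0.65, 0.5],  # claim 3: group leans 'replicates'
]
observed = [True, False, False]   # claim 3's replication failed

print(classification_accuracy(groups, observed))  # 2 of 3 claims correct
```

In the study itself, judgements were elicited and revised through the IDEA protocol's feedback and discussion rounds before aggregation; the sketch only covers the final scoring step.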

Funder

Defense Advanced Research Projects Agency

Publisher

The Royal Society

Subject

Multidisciplinary

Cited by 2 articles.

1. When expert predictions fail. Trends in Cognitive Sciences, November 2023.

2. Validating a forced-choice method for eliciting quality-of-reasoning judgments. Behavior Research Methods, 13 October 2023.
