Systematizing Confidence in Open Research and Evidence (SCORE)
Authors:
Alipourfard Nazanin, Arendt Beatrix, Benjamin Daniel M., Benkler Noam, Bishop Michael Metcalf, Burstein Mark, Bush Martin, Caverlee James, Chen Yiling, Clark Chae, Dreber Anna, Errington Timothy M., Fidler Fiona, Fox Nicholas William, Frank Aaron, Fraser Hannah, Friedman Scott, Gelman Ben, Gentile James, Giles C. Lee, Gordon Michael B., Gordon-Sarney Reed, Griffin Christopher, Gulden Timothy, Hahn Krystal, Hartman Robert, Holzmeister Felix, Hu Xia Ben, Johannesson Magnus, Kezar Lee, Kline Struhl Melissa, Kuter Ugur, Kwasnica Anthony M., Lee Dong-Ho, Lerman Kristina, Liu Yang, Loomas Zachary, Luis Brianna, Magnusson Ian, Miske Olivia, Mody Fallon, Morstatter Fred, Nosek Brian A., Parsons Elan Simon, Pennock David, Pfeiffer Thomas, Pujara Jay, Rajtmajer Sarah, Ren Xiang, Salinas Abel, Selvam Ravi Kiran, Shipman Frank, Silverstein Priya, Sprenger Amber, Squicciarini Anna, Stratman Steve, Sun Kexuan, Tikoo Saatvik, Twardy Charles Richard, Tyner Andrew, Viganola Domenico, Wang Juntao, Wilkinson David Peter, Wintle Bonnie, Wu Jian
Abstract
Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts. Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment. The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert- and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates. Beyond the primary research objective, the data and artifacts generated by this program will be openly shared, providing an unprecedented opportunity to examine research credibility and evidence.
Publisher
Center for Open Science
Cited by
11 articles.