Affiliation:
1. Department of Psychology, University of Guelph, Guelph, Ontario, Canada
Abstract
Over the last decade, replication research in the psychological sciences has become more visible. One way that replication research can be conducted is to compare the results of the replication study with those of the original study to look for consistency, that is, to evaluate whether the original study is “replicable.” Unfortunately, many popular and readily accessible methods for ascertaining replicability, such as comparing significance levels across studies or eyeballing confidence intervals, are generally ill suited to the task of comparing results across studies. To address this issue, we present the prediction interval as a statistic that is effective for determining whether a replication study is inconsistent with the original study. We review the statistical rationale for prediction intervals, demonstrate hand calculations, and provide a walkthrough using an R package for obtaining prediction intervals for means, d values, and correlations. To aid the effective adoption of prediction intervals, we provide guidance on the correct interpretation of results when using prediction intervals in replication research.
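For orientation, the sketch below illustrates the kind of hand calculation the abstract refers to: a prediction interval for the mean of a replication study, computed from the original study's mean, standard deviation, and sample size using the standard formula for a future sample mean. The function name pi_mean and the example values are illustrative rather than taken from the article or its R package, and the calculation assumes both studies sample the same normally distributed population.

# Minimal sketch (illustrative, not the article's package code): prediction
# interval for the mean of a planned replication, given the original study's
# mean, standard deviation, and sample size.
pi_mean <- function(orig_mean, orig_sd, orig_n, rep_n, level = 0.95) {
  # Standard error of the difference between the original sample mean and a
  # future replication mean based on rep_n observations
  se <- orig_sd * sqrt(1 / orig_n + 1 / rep_n)
  # Critical t value with the original study's degrees of freedom
  t_crit <- qt(1 - (1 - level) / 2, df = orig_n - 1)
  c(lower = orig_mean - t_crit * se,
    upper = orig_mean + t_crit * se)
}

# Hypothetical example: original study reported M = 50, SD = 10, N = 40;
# the replication plans N = 80
pi_mean(orig_mean = 50, orig_sd = 10, orig_n = 40, rep_n = 80)

A replication mean falling outside this interval would be considered inconsistent with the original study; the article's R package provides analogous calculations for means, d values, and correlations.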
Funder
Social Sciences and Humanities Research Council