Abstract
Practitioners and policymakers rely on meta-analyses to inform decisions about the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is publication bias, the phenomenon whereby studies with large and/or statistically significant effects are more likely to be published than studies with small or null effects. We investigated this phenomenon empirically by reviewing meta-analyses published in top-tier journals between 1986 and 2013 that quantified the difference between effect sizes from published and unpublished research. We reviewed 383 meta-analyses, of which 81 had sufficient information to calculate an effect size. Results indicated that published studies yielded larger effect sizes than those from unpublished studies (d̄ = 0.18, 95% confidence interval [0.10, 0.25]). Moderator analyses revealed that the difference was larger in meta-analyses that included a wide range of unpublished literature. We conclude that intervention researchers require continued support to publish null findings and that meta-analyses should include unpublished studies to mitigate the potential bias from publication status.
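The headline statistic is a weighted average of per-meta-analysis differences between published and unpublished effect sizes, with a 95% confidence interval around that average. Below is a minimal Python sketch of how such a pooled difference and interval can be computed, assuming a simple fixed-effect inverse-variance model and a normal approximation; the `differences` and `variances` values are hypothetical placeholders, not data from the paper.

```python
import math

# Hypothetical inputs: for each meta-analysis, the difference between the
# mean published and mean unpublished effect size (d_pub - d_unpub) and
# the variance of that difference. Illustrative values only.
differences = [0.25, 0.10, 0.30, 0.05, 0.20]
variances = [0.010, 0.020, 0.015, 0.008, 0.012]

# Inverse-variance (fixed-effect) pooling: weight each difference by the
# reciprocal of its variance, so more precise estimates count more.
weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, differences)) / sum(weights)

# Standard error of the pooled estimate under the fixed-effect model.
se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval using the normal approximation (z = 1.96).
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled difference = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A random-effects model, which adds a between-study variance component to each weight, would be the more common choice when the true difference is expected to vary across meta-analyses; the fixed-effect version is shown here only to keep the arithmetic transparent.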
Publisher
American Educational Research Association (AERA)
Cited by
185 articles.