Affiliation:
1. Department of Biostatistics, Boston University School of Public Health, Boston, Massachusetts, USA
2. Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, Massachusetts, USA
3. Tufts Clinical and Translational Science Institute, Tufts University, Boston, Massachusetts, USA
Abstract
When comparing the performance of two or more competing tests, simulation studies commonly focus on statistical power. However, if the sizes of the tests being compared differ either from one another or from the nominal size, comparing tests based on power alone may be misleading. By analogy with diagnostic accuracy studies, we introduce relative positive and negative likelihood ratios to factor in both power and size in the comparison of multiple tests. We derive sample size formulas for a comparative simulation study. As an example, we compared the performance of six statistical tests for small‐study effects in meta‐analyses of randomized controlled trials: Begg's rank correlation, Egger's regression, Schwarzer's method for sparse data, the trim‐and‐fill method, the arcsine‐Thompson test, and Lin and Chu's combined test. We illustrate that comparing power alone, or power adjusted or penalized for size, can be misleading, and show how the proposed likelihood ratio approach enables an accurate comparison of the trade‐off between power and size across competing tests.
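To make the diagnostic-accuracy analogy concrete, the sketch below computes positive and negative likelihood ratios for a test and the relative likelihood ratios between two tests. This is a minimal illustration, not the paper's implementation: it assumes the standard diagnostic definitions with power playing the role of sensitivity and empirical size the role of the false-positive rate, and all function names and numbers are illustrative.

# Minimal sketch: likelihood ratios for comparing hypothesis tests,
# assuming the diagnostic-accuracy analogy stated in the abstract
# (power ~ sensitivity, size ~ 1 - specificity). Illustrative only.

def likelihood_ratios(power: float, size: float) -> tuple[float, float]:
    """Positive and negative likelihood ratios of a single test."""
    lr_pos = power / size              # analog of sensitivity / (1 - specificity)
    lr_neg = (1 - power) / (1 - size)  # analog of (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def relative_likelihood_ratios(power_a, size_a, power_b, size_b):
    """Relative LR+ and LR- of test A versus test B."""
    lr_pos_a, lr_neg_a = likelihood_ratios(power_a, size_a)
    lr_pos_b, lr_neg_b = likelihood_ratios(power_b, size_b)
    return lr_pos_a / lr_pos_b, lr_neg_a / lr_neg_b

# Hypothetical example: test A has higher power than test B (0.80 vs 0.70)
# but is anti-conservative (empirical size 0.10 vs 0.05). Power alone
# favors A; the likelihood ratios expose the power/size trade-off.
rel_lr_pos, rel_lr_neg = relative_likelihood_ratios(0.80, 0.10, 0.70, 0.05)
print(f"relative LR+: {rel_lr_pos:.2f}")  # (0.80/0.10) / (0.70/0.05) ~= 0.57
print(f"relative LR-: {rel_lr_neg:.2f}")  # (0.20/0.90) / (0.30/0.95) ~= 0.70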
Subject
Statistics, Probability and Uncertainty; General Medicine; Statistics and Probability
Cited by
1 article.