Affiliation:
1. Decision Sciences, IQVIA, Durham, NC, USA
2. Immunology, Eli Lilly, Indianapolis, IN, USA
Abstract
Background: Cohen's kappa is a statistic that estimates interobserver agreement. It was originally introduced to help develop diagnostic tests: for example, two observers' interpretive readings of a mammogram or other imaging were compared at a single point in time. Kappa is known to depend on the prevalence of disease, which makes kappas from different settings hard to compare.
Methods: Using simulation, we examine an analogous situation, not previously described, that occurs in clinical trials where sequential measurements are obtained to evaluate disease progression or clinical improvement over time.
Results: We show that weighted kappa, used for multilevel outcomes, changes during the trial even when the observer's performance is held constant.
Conclusions: Kappa and closely related measures can therefore be used only with great difficulty, if at all, for quality assurance in clinical trials.
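A minimal sketch of the effect the abstract describes, not the authors' simulation: assuming a hypothetical 5-level ordinal outcome and two raters whose error behavior is identical and held fixed, weighted kappa still shifts when the distribution of true scores changes across visits, as it would during a trial with clinical improvement. All names and parameters here are illustrative assumptions.

```python
# Sketch: weighted kappa depends on the score distribution ("prevalence"),
# even with constant observer performance. Illustrative only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 100_000
levels = np.arange(5)  # hypothetical 5-level ordinal outcome

def rate(truth, p_err=0.2):
    """Constant observer performance: with probability p_err the rating
    is off by one level, clipped to the ends of the scale."""
    shift = rng.choice([-1, 1], size=truth.size)
    err = rng.random(truth.size) < p_err
    return np.clip(truth + err * shift, levels.min(), levels.max())

# Baseline visit: true scores spread evenly over the scale.
truth_base = rng.choice(levels, size=n, p=[0.2, 0.2, 0.2, 0.2, 0.2])
# Later visit: most patients improved, scores pile up at the low end.
truth_late = rng.choice(levels, size=n, p=[0.6, 0.2, 0.1, 0.05, 0.05])

for name, truth in [("baseline", truth_base), ("late visit", truth_late)]:
    kappa = cohen_kappa_score(rate(truth), rate(truth), weights="quadratic")
    print(f"{name}: weighted kappa = {kappa:.3f}")
```

The two printed kappas differ even though the rating process is unchanged, because chance agreement, the p_e term in kappa = (p_o - p_e) / (1 - p_e), depends on the marginal score distribution.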
Publisher
Springer Science and Business Media LLC
Subject
Pharmacology (medical); Public Health, Environmental and Occupational Health; Pharmacology, Toxicology and Pharmaceutics (miscellaneous)
Cited by
1 article.