Affiliation:
1. University of Texas at Austin
Abstract
Previously proposed methods for calculating the kappa measure of nominal rating agreement among multiple raters are not applicable in many situations. This paper presents a more general computational method which can be used across a broader range of rating designs, including those in which raters vary with respect to their base rates and pairs of raters vary with respect to how many subjects they rate in common. A Monte Carlo method for determining the statistical significance of this generalized kappa coefficient is discussed.
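As a concrete illustration (not the paper's exact algorithm, whose details are in the full text), a minimal Python sketch of a pairwise generalized kappa might look like the following. It allows each rater their own base rates, lets rater pairs share different numbers of subjects (including none), and attaches a Monte Carlo permutation p-value; all function names and the data layout are assumptions for this sketch.

```python
import random
from itertools import combinations

def generalized_kappa(ratings):
    """Generalized multi-rater kappa for nominal ratings (illustrative sketch).

    ratings: list of dicts, one per subject, mapping rater -> category.
    Raters may skip subjects, and rater pairs may rate few or no
    subjects in common.
    """
    raters = sorted({r for subj in ratings for r in subj})
    # Each rater's own base rates -- raters are NOT assumed to share marginals.
    base = {}
    for r in raters:
        cats = [subj[r] for subj in ratings if r in subj]
        base[r] = {c: cats.count(c) / len(cats) for c in set(cats)}
    po_num = po_den = pe_num = 0.0
    for a, b in combinations(raters, 2):
        common = [s for s in ratings if a in s and b in s]
        if not common:
            continue  # this pair rated no subjects in common
        po_num += sum(s[a] == s[b] for s in common)
        po_den += len(common)
        # Chance agreement for this pair from each rater's own base rates,
        # weighted by how many subjects the pair rated in common.
        pe_num += len(common) * sum(
            p * base[b].get(c, 0.0) for c, p in base[a].items())
    po, pe = po_num / po_den, pe_num / po_den
    # Assumes pe < 1 (i.e. raters do not all use a single category).
    return (po - pe) / (1.0 - pe)

def kappa_pvalue(ratings, n_sim=2000, seed=1):
    """Monte Carlo significance: permute each rater's labels independently
    within the subjects that rater scored (preserving base rates and the
    missing-data pattern) and count simulated kappas >= the observed one."""
    rng = random.Random(seed)
    observed = generalized_kappa(ratings)
    hits = 0
    for _ in range(n_sim):
        shuffled = [dict(s) for s in ratings]
        for r in {r for s in ratings for r in s}:
            idx = [i for i, s in enumerate(ratings) if r in s]
            vals = [ratings[i][r] for i in idx]
            rng.shuffle(vals)
            for i, v in zip(idx, vals):
                shuffled[i][r] = v
        if generalized_kappa(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)
```

With perfect agreement the observed agreement is 1, so the statistic reduces to (1 − Pe)/(1 − Pe) = 1 regardless of the raters' base rates; the permutation scheme keeps each rater's marginals fixed, which matches the idea of testing agreement beyond rater-specific chance.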
Subject
Applied Mathematics, Applied Psychology, Developmental and Educational Psychology, Education
Cited by
74 articles.