Abstract
Although the evaluation of inter-rater agreement is often necessary in psychometric procedures (e.g., standard settings or assessment centers), the measures typically used are not without problems. Existing measures are known to penalize raters in specific settings, and some are highly dependent on the marginals and should not be used in ranking settings. This article introduces a new approach based on the probability of consistencies in a setting where n independent raters rank k items. The discrete theoretical probability distribution of the sum of the pairwise absolute row differences (PARDs) is used to evaluate the inter-rater agreement of empirically retrieved rating results. This is done by calculating the sum of PARDs in an empirically obtained $$n\times k$$ matrix and comparing it with the theoretically expected distribution of the sum of PARDs under the assumption that raters rank the items at random. In this article, the theoretical considerations of the PARDs approach are presented, and two initial simulation studies are used to investigate the performance of the approach.
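As a rough illustration of the statistic the abstract describes, the sketch below computes the sum of PARDs for an n × k rank matrix (one row per rater, one column per item) and approximates the null distribution by Monte Carlo simulation of randomly ranking raters. The article derives the exact discrete distribution; the simulation here is only a stand-in for it, and all function names (`pards_sum`, `null_distribution`) are illustrative, not taken from the article.

```python
# Minimal sketch of the sum-of-PARDs statistic, assuming ranks are stored
# as an n x k integer matrix (row = rater, column = item, entries 1..k).
import numpy as np

def pards_sum(ranks: np.ndarray) -> float:
    """Sum over all rater pairs of the absolute rank differences per item."""
    n = ranks.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.abs(ranks[i] - ranks[j]).sum()
    return total

def null_distribution(n: int, k: int, draws: int = 10_000, seed: int = 0) -> np.ndarray:
    """Monte Carlo approximation (not the article's exact derivation) of the
    sum-of-PARDs distribution when each rater ranks the k items at random."""
    rng = np.random.default_rng(seed)
    samples = np.empty(draws)
    for d in range(draws):
        # Each rater's ranking is an independent random permutation of 1..k.
        ranks = np.array([rng.permutation(k) + 1 for _ in range(n)])
        samples[d] = pards_sum(ranks)
    return samples

# Hypothetical example: 5 raters ranking 4 items in near-perfect agreement.
observed = np.array([
    [1, 2, 3, 4],
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 2, 4, 3],
    [1, 2, 3, 4],
])
stat = pards_sum(observed)
null = null_distribution(n=5, k=4)
# Smaller sums mean stronger agreement, so the one-sided p-value is the
# probability of observing at least this much agreement by chance.
p = (null <= stat).mean()
print(f"sum of PARDs = {stat:.0f}, p = {p:.4f}")
```

A small simulated p-value indicates that the observed agreement is unlikely under random ranking, which mirrors how the theoretical distribution is used in the article.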
Publisher
Springer Science and Business Media LLC
Subject
Statistics and Probability
Cited by
3 articles.