Affiliation:
1. Seabrook Island, SC
2. Johns Hopkins University
Abstract
This article offers an alternative methodology for practitioners and researchers to use in establishing interrater reliability for testing purposes. The majority of studies on interrater reliability use a traditional methodology whereby two raters are compared using a Pearson product-moment correlation. This traditional method estimates interrater reliability by correlating the two raters' scores for the same subjects. This study instead used an observer-rater paradigm in which 20 observers' scores were compared to the scores of an expert rater. The results call into question the common practice for examining interrater reliability: the correlation of two raters may not be a reliable way of determining whether every evaluator using the instrument will obtain scores that accurately report the developmental status of young children.
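The traditional method the abstract describes can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the score lists are invented for demonstration, and the Pearson product-moment correlation is computed directly from its definition.

```python
from statistics import mean
from math import sqrt

def pearson(x, y):
    # Pearson product-moment correlation between two lists of scores:
    # covariance of the deviations divided by the product of the
    # standard-deviation terms.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from two raters assessing the same six children
rater_a = [12, 15, 9, 20, 14, 11]
rater_b = [11, 16, 10, 19, 15, 10]

print(round(pearson(rater_a, rater_b), 3))  # prints 0.958
```

A high correlation here only shows that the two raters rank the children similarly; as the abstract argues, it does not guarantee that any given evaluator's scores match those an expert rater would assign, which is what the observer-rater paradigm checks directly.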
Subject
General Health Professions, Developmental and Educational Psychology, Education
Cited by
4 articles.