Affiliation:
1. University of California, Los Angeles
Abstract
This article describes the limitations of certain statistical techniques for handling multiple criteria when assessing reliability, concurrent validity, and generalizability, and suggests alternative approaches for performance assessment measures. Applying a latent variable modeling approach to the data revealed a significant improvement in interrater reliability, concurrent validity, and generalizability (over raters and topics) on a scoring rubric; the gain in concurrent validity was particularly pronounced. A main limitation of the study, however, is its small number of subjects, which may affect the validity of some of the findings.
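The article's own model specification is not reproduced here, but the general idea of a latent variable approach to interrater reliability can be sketched as follows: each rater's score is treated as an indicator of a single latent proficiency, and reliability is computed from the estimated loadings. Everything below (the rater loadings, sample size, and the use of a principal-component fit in place of a full CFA) is an illustrative assumption, not the study's actual procedure.

```python
import numpy as np

# Illustrative sketch only: hypothetical data, not the article's dataset.
rng = np.random.default_rng(0)
n_subjects, n_raters = 200, 4

true_score = rng.normal(size=(n_subjects, 1))      # latent proficiency
loadings_true = np.array([0.9, 0.8, 0.85, 0.7])    # assumed rater loadings
noise = rng.normal(size=(n_subjects, n_raters))
scores = true_score * loadings_true + noise * np.sqrt(1 - loadings_true**2)

# Estimate one-factor loadings from the first principal component of the
# rater correlation matrix (a rough stand-in for a fitted CFA).
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
loadings_est = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))  # fix sign

# McDonald's omega: model-based reliability of the summed rater scores.
omega = loadings_est.sum() ** 2 / (
    loadings_est.sum() ** 2 + (1 - loadings_est ** 2).sum()
)
print(round(omega, 2))
```

Unlike a simple average correlation, this composite (omega) reliability weights raters by how strongly each one tracks the latent trait, which is one reason latent variable approaches can outperform classical indices on multi-rater rubric data.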
Subject
Applied Mathematics, Applied Psychology, Developmental and Educational Psychology, Education
Cited by
5 articles.