Affiliation:
1. Duke University School of Medicine
2. University of Nebraska-Lincoln
3. University of Delaware
Abstract
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling and non-struggling writers in Grades 3–5 drawn from a larger study. Students wrote six essays across three genres. All essays were hand-scored by four raters and by an AES system called Project Essay Grade (PEG). Both scoring methods were highly reliable, but PEG was more reliable for non-struggling students, while hand-scoring was more reliable for struggling students. We provide recommendations for optimizing writing assessment by blending hand-scoring with AES.
Publisher
American Educational Research Association (AERA)
Cited by
11 articles.