Affiliations:
1. University of Melbourne, Australia
2. Iowa State University, USA
Abstract
Argument-based validation requires test developers and researchers to specify what is entailed in test interpretation and use. Doing so has been shown to yield advantages (Chapelle, Enright, & Jamieson, 2010), but it also requires an analysis of how the concerns of language testers can be conceptualized in the terms used to construct a validity argument. This article presents one such analysis by examining how issues associated with the rating of test takers’ linguistic performance can be included in a validity argument. Through a manual search of published language testing research, we gathered examples of studies investigating the quality of rating processes and products, and we analyzed them in terms of how the research could be framed within a validity argument. Drawing on Kane’s (2001, 2006, 2013) conceptualization of inferences, warrants, and assumptions, we show that the relevance of research on the rating of test performances extends well beyond one or two inferences about rater reliability. Such results, for example, provide backing for assumptions about the correspondence of the rating scale to the test construct (explanation inference) and to the context of extrapolation, as well as about the decisions made on the basis of the ratings and their consequences. Our analysis reveals the extensive reach of the rating process into many aspects of test score meaning and offers concrete suggestions for integrating rating issues into future argument-based validation studies.
Subject
Linguistics and Language, Social Sciences (miscellaneous), Language and Linguistics
Cited by
59 articles.