Abstract
For an assessment with multiple sections measuring related constructs, test takers with higher scores on one section are expected to perform better on the related sections. When the sections involve different test designs, test takers with preknowledge of an administration may score unusually high on some sections but not on others. To address such inconsistency, regression approaches have been applied successfully for many years to compare section scores in operational settings. With a focus on outlier analysis, we propose a new two-stage regression approach to detecting score inconsistency among different sections of a test. It is designed to leverage rich historical information from large-scale assessments to help detect unusually high scores on the easier-to-cheat sections, given the scores on the harder-to-cheat sections, in new administrations. This paper presents a statistical framework for the two-stage regression procedure and develops analytical results under a null model of no exposure. It also describes an analysis procedure to guide applications. An empirical example is provided to illustrate the proposed method, to evaluate the performance and robustness of the analytical results in real settings, and to compare it with two other methods for the detection of inconsistent section scores.
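The core idea described above can be illustrated with a minimal sketch: fit a regression of the easier-to-cheat section score on the harder-to-cheat section score using historical data (stage 1), then flag test takers in a new administration whose standardized residuals are unusually large and positive (stage 2). This is not the paper's actual procedure or null-model results; the data, cutoff, and simple linear fit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical data: scores on a harder-to-cheat section (x)
# and an easier-to-cheat section (y) for past test takers.
x_hist = rng.normal(50, 10, 1000)
y_hist = 0.8 * x_hist + rng.normal(0, 5, 1000)

# Stage 1: fit a simple linear regression on the historical data.
slope, intercept = np.polyfit(x_hist, y_hist, 1)
resid_sd = (y_hist - (intercept + slope * x_hist)).std(ddof=2)

# Stage 2: standardized residuals for a new administration; unusually
# large positive values suggest an inflated easier-to-cheat score.
x_new = np.array([45.0, 60.0, 52.0])
y_new = np.array([38.0, 49.0, 75.0])  # third score is suspiciously high
z = (y_new - (intercept + slope * x_new)) / resid_sd
flagged = z > 3.0  # one-sided cutoff: only unusually high scores matter
print(flagged)
```

Only the third test taker exceeds the one-sided cutoff, since a deficient score on the easier-to-cheat section is not evidence of preknowledge.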
Publisher
American Educational Research Association (AERA)