Affiliation:
1. Law School Admission Council, Newtown, PA, USA
Abstract
In standardized multiple-choice testing, examinees may change their answers for various reasons. The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at many testing organizations. This article exploits a recent approach where the information about all previous answers is used only to partition administered items into two disjoint subtests: items where an AC occurred and items where an AC did not occur. Two optimal statistics are described, each measuring a difference in performance between these subtests, where the performance is estimated from the final responses. Answer-changing behavior was simulated, where realistic distributions of wrong-to-right, wrong-to-wrong, and right-to-wrong ACs were achieved under various conditions controlled by the following independent variables: type of test, amount of aberrancy, and amount of uncertainty. Results of computer simulations confirmed the theoretical constructs on the optimal power of both statistics and provided several recommendations for practitioners.
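The partitioning idea in the abstract can be sketched in code: items are split into two disjoint subtests by whether an answer change (AC) occurred, and performance on the final responses is compared across the two subtests. The sketch below is illustrative only; it uses a simple two-proportion z-statistic as the comparison, which is an assumption on my part and not one of the paper's two optimal statistics.

```python
import math

def ac_partition_stat(final_correct, changed):
    """Compare final-response performance on changed vs. unchanged items.

    final_correct: list of 0/1 flags, 1 if the final response is correct.
    changed: list of 0/1 flags, 1 if an answer change occurred on the item.

    Returns a two-proportion z-statistic (illustrative choice, not the
    paper's optimal statistics), or None if either subtest is empty or
    the pooled variance is zero.
    """
    # Partition administered items into the two disjoint subtests.
    ac = [c for c, ch in zip(final_correct, changed) if ch]
    no_ac = [c for c, ch in zip(final_correct, changed) if not ch]
    if not ac or not no_ac:
        return None

    # Proportion correct on each subtest, estimated from final responses.
    p1 = sum(ac) / len(ac)
    p2 = sum(no_ac) / len(no_ac)

    # Pooled two-proportion z-statistic.
    p = (sum(ac) + sum(no_ac)) / (len(ac) + len(no_ac))
    se = math.sqrt(p * (1 - p) * (1 / len(ac) + 1 / len(no_ac)))
    if se == 0:
        return None
    return (p1 - p2) / se
```

A large positive value would indicate better performance on changed items than on unchanged ones (e.g., an excess of wrong-to-right ACs), the kind of difference the statistics described in the article are designed to detect.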
Subject
Psychology (miscellaneous), Social Sciences (miscellaneous)
Cited by
5 articles.