Affiliations:
1. Michigan State University
2. University of Wisconsin-Madison
Abstract
Recent decades have seen tremendous growth in the development of collusion detection methods, many of which rest on the assumption that examinees who engage in collusion will display unusually similar scores/responses. In this article, we expand the definition of answer similarity to include not only item scores/responses but also item response times (RTs). Using detailed simulations and an experimental data set, we show that (a) both the new and existing similarity statistics are able to control the Type I error rate in most of the studied conditions and (b) the new statistics are, on average, much more powerful than the existing statistics at detecting several types of simulated collusion.
Publisher
American Educational Research Association (AERA)