Abstract
A frequently encountered security issue in writing tests is nonauthentic text submission: test takers submit texts that are not their own but are copies of texts prepared by someone else. In this report, we propose AutoESD, a human-in-the-loop, automated system for detecting nonauthentic texts in large-scale writing tests, and report its performance on an operational data set. AutoESD uses multiple automated text-similarity measures to identify suspect texts and provides an analytics-enhanced web application that helps human experts review the identified texts. To evaluate the system's performance, we computed its similarity measures on TOEFL iBT® test writing responses collected from multiple remote administrations and examined their distributions. The results were highly encouraging: the distributional characteristics of the AutoESD similarity measures were effective in identifying suspect texts, and the measures could be computed quickly enough not to affect the operational score-turnaround timeline.
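To make the abstract's core mechanism concrete, below is a minimal sketch of pairwise similarity flagging over a batch of responses. It assumes the RapidFuzz library (which the report cites) and its normalized edit-distance score `fuzz.ratio`; the function name, threshold, and sample texts are illustrative, not the operational AutoESD measures.

```python
from itertools import combinations
from rapidfuzz import fuzz  # RapidFuzz (Bachmann, 2021), cited by the report

def flag_suspect_pairs(responses, threshold=85.0):
    """Flag response pairs whose similarity exceeds a threshold.

    `responses` maps a response ID to its text. `threshold` is a
    hypothetical cutoff on RapidFuzz's 0-100 similarity score; the
    report's actual measures and cutoffs are not specified here.
    """
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(responses.items(), 2):
        score = fuzz.ratio(text_a, text_b)  # normalized edit-distance similarity
        if score >= threshold:
            flagged.append((id_a, id_b, score))
    # Surface the most similar pairs first for human expert review.
    return sorted(flagged, key=lambda pair: -pair[2])

# Example: two near-duplicate responses and one distinct response.
responses = {
    "r1": "The graph shows a steady increase in sales over five years.",
    "r2": "The graph shows a steady increase in sales over five years!",
    "r3": "I disagree with the statement for two main reasons.",
}
print(flag_suspect_pairs(responses))  # flags only the (r1, r2) pair
```

A human-in-the-loop design of the kind described would present such ranked pairs to expert reviewers rather than acting on them automatically.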