Cheating Automatic Short Answer Grading with the Adversarial Usage of Adjectives and Adverbs

Author:

Anna Filighera, Sebastian Ochs, Tim Steuer, Thomas Tregel

Abstract

Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However, for short answer formats, automatic grading is challenging due to natural language ambiguity and versatility. While automatic short answer grading models are beginning to rival human performance on some datasets, their robustness, especially to adversarially manipulated data, is questionable. Exploitable vulnerabilities in grading models can have far-reaching consequences, ranging from cheating students receiving undeserved credit to undermining automatic grading altogether—even when most predictions are valid. In this paper, we devise a black-box adversarial attack tailored to the educational short answer grading scenario to investigate the grading models’ robustness. In our attack, we insert adjectives and adverbs into natural places of incorrect student answers, fooling the model into predicting them as correct. We observed a loss of prediction accuracy between 10 and 22 percentage points using the state-of-the-art models BERT and T5. While our attack made answers appear less natural to humans in our experiments, it did not significantly increase the graders’ suspicions of cheating. Based on our experiments, we provide recommendations for utilizing automatic grading systems more safely in practice.
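The attack described in the abstract can be sketched as a greedy black-box search over modifier insertions. The word lists, the stub grader, and all function names below are illustrative placeholders, not the authors' implementation; a real attack would query a fine-tuned BERT or T5 grader instead of the toy `grade` function.

```python
# Hypothetical candidate word lists; the paper's actual sets are not given here.
ADJECTIVES = ["basic", "general", "typical"]
ADVERBS = ["basically", "generally", "typically"]

def grade(answer: str) -> int:
    """Stand-in for a black-box grader (e.g., fine-tuned BERT/T5).
    Returns 1 (predicted correct) or 0 (predicted incorrect).
    Here: a toy rule that rewards hedging words, purely so the sketch runs."""
    return 1 if any(w in answer.split() for w in ADJECTIVES + ADVERBS) else 0

def attack(incorrect_answer: str):
    """Greedy black-box search: try inserting one adjective/adverb at each
    token boundary of an incorrect answer; return the first variant the
    grader accepts as correct, or None if no single insertion fools it."""
    tokens = incorrect_answer.split()
    for i in range(len(tokens) + 1):
        for word in ADJECTIVES + ADVERBS:
            candidate = " ".join(tokens[:i] + [word] + tokens[i:])
            if grade(candidate) == 1:
                return candidate
    return None

adversarial = attack("the heap stores objects")
print(adversarial)
```

In practice the insertion positions would be restricted to grammatically natural slots (e.g., before nouns for adjectives, before verbs for adverbs) so the manipulated answer does not look suspicious to a human grader, which is the constraint the paper's human-evaluation experiments probe.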

Funder

Hessian State Chancellery in the Department of Digital Strategy and Development

Technische Universität Darmstadt

Publisher

Springer Science and Business Media LLC

Subject

Computational Theory and Mathematics, Education

References: 100 articles.

Cited by 1 article:

1. Short-Answer Grading for German: Addressing the Challenges. International Journal of Artificial Intelligence in Education, 2023-12-07.

