Affiliation:
1. The Department of Computer Science and Information Technology, School of Electrical Engineering, University of Belgrade, Bulevar kralja Aleksandra, 11000 Belgrade, Serbia
2. Faculty of Informatics and Computing, Singidunum University, Danijelova 32, 11000 Belgrade, Serbia
Abstract
Digital technology has transformed many aspects of modern life, including assessment and testing. Nevertheless, paper tests, despite their seemingly archaic nature, retain a prominent position in many assessment domains: their accessibility, familiarity, security, cost-effectiveness, and versatility keep them in widespread use. Many educational institutions that examine large numbers of candidates therefore continue to rely on paper tests, which creates a demand for their automated assessment in order to reduce the workload of teaching staff, improve the objectivity of evaluation, and speed up the delivery of results. A variety of software systems have been developed that can automatically score specific question types. To enable meaningful comparison among these systems, related question types must first be categorized systematically according to their content and format. In this paper, we present the implementation of such a software system using artificial intelligence techniques, progressively extending its capabilities to evaluate increasingly complex question types, with the ultimate objective of evaluating all question types encountered in paper-based tests. The system achieved a recognition success rate of 99.89% on a curated dataset of 734,825 multiple-choice answers, 99.91% on 86,450 matching answers, and 95.40% on 129,675 short answers.
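The recognition success rates reported above are simple ratios of correctly recognized answers to total answers per question type. A minimal sketch of that calculation is shown below; the per-type totals come from the abstract, while the "correct" counts are back-calculated from the reported percentages purely for illustration and are not figures from the paper.

```python
# Sketch: computing a recognition success rate (correct / total, as a
# percentage). Totals are from the abstract; the correct counts below
# are hypothetical values consistent with the reported rates.

def recognition_rate(correct: int, total: int) -> float:
    """Return the recognition success rate as a percentage."""
    return 100.0 * correct / total

# Reported answer totals per question type.
totals = {
    "multiple-choice": 734_825,
    "matching": 86_450,
    "short answer": 129_675,
}

# Hypothetical correct counts, back-calculated from the reported rates.
correct = {
    "multiple-choice": 734_017,  # ~99.89%
    "matching": 86_372,          # ~99.91%
    "short answer": 123_710,     # ~95.40%
}

for qtype, total in totals.items():
    print(f"{qtype}: {recognition_rate(correct[qtype], total):.2f}%")
```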
Funder
Science Fund of the Republic of Serbia
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
References (29 articles)
1. Jocović, V., Đukić, J., and Mišić, M. (2019, January 29). First Experiences with Moodle and Coderunner Platforms in Programming Course. Proceedings of the Tenth International Conference on e-Learning, Belgrade Metropolitan University, Belgrade, Serbia.
2. Lewis (2009). Internet versus paper-and-pencil survey methods in psychological experiments: Equivalence testing of participant responses to health-related messages. Aust. J. Psychol.
3. Computer-based and paper-based testing: Does the test administration mode influence the reliability and validity of achievement tests? (2018). J. Lang. Linguist. Stud.
4. McClelland (2020). A comparison of computer based testing and paper and pencil testing in mathematics assessment. Online J. New Horiz. Educ.
5. Candrlic, S., Katić, M.A., and Dlab, M.H. (2014, January 26–30). Online vs. Paper-based testing: A comparison of test results. Proceedings of the 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia.
Cited by
1 article