Authors:
Ahmed Hosnia M. M., Sorour Shaymaa E.
Abstract
Evaluating the quality of university exam papers is crucial for universities seeking institutional and program accreditation. Currently, exam papers are assessed manually, a process that can be tedious, lengthy, and, in some cases, inconsistent, often because assessment focuses only on the formal specifications of exam papers. This study develops an intelligent system for the automatic evaluation of university exam papers in terms of both form and content, ensuring adherence to quality standards. The system is composed of two subsystems: the first evaluates compliance with formal specifications, and the second analyzes the content. Content analysis involves automatically categorizing exam questions according to Bloom's cognitive levels (BCLs) and determining the representation ratio of each level in the exam paper. This subsystem comprises four main modules: 1) question collection; 2) text pre-processing using natural language processing (NLP) methods; 3) feature engineering using the CountVectorizer method to convert questions into feature vectors; and 4) a classification module based on the Logistic Regression (LR) algorithm that assigns each question to one of the six BCL categories: knowledge, comprehension, application, analysis, synthesis, and evaluation. Experimental results indicate that the system achieves an average accuracy of 98.5%.
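The classification pipeline described in the abstract (CountVectorizer features feeding a Logistic Regression classifier over the six Bloom levels) can be sketched as follows. This is an illustrative minimal sketch using scikit-learn, not the authors' implementation; the example questions and labels below are hypothetical stand-ins for the paper's dataset, and the paper's pre-processing steps (tokenization, stop-word removal, etc.) are reduced to lowercasing for brevity.

```python
# Minimal sketch: bag-of-words features + multinomial Logistic Regression
# for classifying exam questions into Bloom's cognitive levels (BCLs).
# Toy data only; the real system is trained on a collected question corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical example questions, one per Bloom level.
questions = [
    "Define the term operating system",                  # knowledge
    "Explain how paging works in your own words",        # comprehension
    "Apply Dijkstra's algorithm to the given graph",     # application
    "Compare and contrast TCP and UDP",                  # analysis
    "Design a schema for a library database",            # synthesis
    "Justify your choice of sorting algorithm",          # evaluation
]
labels = [
    "knowledge", "comprehension", "application",
    "analysis", "synthesis", "evaluation",
]

# Pipeline: CountVectorizer turns each question into a term-count
# feature vector; Logistic Regression predicts the Bloom level.
clf = make_pipeline(
    CountVectorizer(lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(questions, labels)

# Predict the Bloom level of an unseen question.
pred = clf.predict(["Define the main components of a CPU"])[0]
print(pred)
```

The representation ratio of each level in a full exam paper could then be obtained by predicting a label for every question and computing the per-label frequencies.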
Publisher
Springer Science and Business Media LLC