Affiliation:
1. University of Peradeniya
Abstract
Background
In medical education, evaluating student performance is crucial for refining teaching strategies and identifying areas for improvement. This study explores the potential of using exam scores to predict future academic outcomes. By employing statistical methods and machine learning, we investigate how scores from different examinations interrelate and influence student progress.
Methods
The study was conducted at the Department of Pharmacology, Faculty of Medicine, University of Peradeniya, Sri Lanka. The results of the Foundation (F), Systematic Pharmacology 1 (S1) and Systematic Pharmacology 2 (S2) examinations of three consecutive batches of medical students were extracted and de-identified. The data set was randomly split into a 70% training set and a 30% test set. A multiple linear regression model, a random forest model, a k-nearest neighbour model and a support-vector machine model were fitted to predict the S2 score from the F and S1 scores. Receiver operating characteristic (ROC) curves were constructed on the training data to predict performance in S1 and S2 using the marks of the preceding examinations. The linear regression model was validated by running predictions on the test set, and accuracy measures were calculated at the cutoff score established on the training data. Odds ratios were computed to assess the association between failing an examination and the likelihood of failing subsequent examinations.
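As a rough illustration of the workflow described above (not the authors' code), the sketch below fits a linear regression predicting S2 from F and S1 on a 70% training split and evaluates a fail/pass ROC on the held-out 30%. The file name, column names and the pass mark of 50 are assumptions made for illustration, and scikit-learn is used here purely as an example toolkit.

```python
# Illustrative sketch of the described workflow, not the authors' implementation.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("deidentified_scores.csv")   # hypothetical file with F, S1, S2 columns
X, y = df[["F", "S1"]], df["S2"]

# 70% training / 30% test split, as in the study
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=42)

# Multiple linear regression: S2 ~ F + S1
model = LinearRegression().fit(X_train, y_train)
s2_pred = model.predict(X_test)

# ROC analysis for predicting an S2 fail from the linear combination of F and S1
PASS_MARK = 50                                # assumed cutoff; not stated in the abstract
fail_actual = (y_test < PASS_MARK).astype(int)
auc = roc_auc_score(fail_actual, -s2_pred)    # lower predicted score => higher fail risk
print(f"Test AUC: {auc:.2f}")
```

The predicted S2 score is negated before the ROC calculation so that larger values correspond to a higher risk of failing, which is the event the curve is meant to detect.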
Results
The results of 583 students were analyzed. The multiple linear regression model had a residual standard error of 8.21 and an adjusted R-squared value of 0.45. The F-statistic was 84.5 (p < 0.001). The ROC curve for the model predicting S2 performance from the linear combination of F and S1 scores had an area under the curve (AUC) of 87% for the training data and 88% for the test data. The sensitivity and specificity on the unseen test data were 100% and 64.7%, respectively.
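Continuing the illustrative sketch after Methods, the test-set sensitivity and specificity can be checked at a cutoff fixed on the training ROC curve. Using Youden's J to pick that cutoff is an assumption for illustration; the abstract only states that the cutoff was established on the training data.

```python
# Continues the earlier sketch: model, X_train, y_train, s2_pred, fail_actual
# and PASS_MARK are defined there. Cutoff selection via Youden's J is assumed.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve

# Pick the threshold on the TRAINING data (maximising TPR - FPR)
fail_train = (y_train < PASS_MARK).astype(int)
fpr_tr, tpr_tr, thr_tr = roc_curve(fail_train, -model.predict(X_train))
cutoff = thr_tr[np.argmax(tpr_tr - fpr_tr)]

# Apply the fixed cutoff to the test predictions and compute accuracy measures
fail_pred = (-s2_pred >= cutoff).astype(int)
tn, fp, fn, tp = confusion_matrix(fail_actual, fail_pred).ravel()
print(f"Sensitivity: {tp / (tp + fn):.3f}  Specificity: {tn / (tn + fp):.3f}")
```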
Conclusion
S2 performance could be predicted from the F and S1 scores with 100% sensitivity and 64.7% specificity. This model could therefore be used for the early identification of students at risk of failing future examinations, enabling timely, personalized interventions and corrective measures.
Publisher
Research Square Platform LLC