Abstract
Background
In medical education, reliable and valid assessment is critical to the learning process. This study applied post-validation item analysis to build a pool of valid questions for incorporation into the question bank.
Methods
A cross-sectional study was performed at the College of Medicine, University of Bisha, Saudi Arabia. The study targeted 250 multiple-choice items and their 750 distractors from examinations administered between 2017 and 2020. Post-validation item analysis was performed to evaluate item quality using test-scoring and reporting software. Data were analysed with SPSS Version 25. Quantitative variables were expressed as mean (SD), and qualitative variables as number and percentage. An independent t-test was used to assess associations between the item analysis parameters. A value of p<0.05 was considered statistically significant.
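For readers unfamiliar with these parameters, the sketch below shows how the item-level indices and the reliability coefficient are conventionally computed under classical test theory. It is a minimal illustration, not the test-scoring software used in the study: the function name item_analysis, the 27% upper/lower grouping, the <5% non-functional-distractor cutoff and the assumption of four-option items are standard conventions assumed here, not details confirmed by the abstract.

```python
# Illustrative item analysis under classical test theory.
# Assumptions (not from the abstract): four options per item,
# 27% upper/lower criterion groups, <5% cutoff for a
# non-functional distractor (NFD).
import numpy as np

def item_analysis(scores, chosen, key, group_frac=0.27, nfd_cutoff=0.05):
    """scores: (examinees x items) 0/1 matrix of correct answers.
    chosen: (examinees x items) matrix of selected options (0-3).
    key:    length-items array of correct option indices."""
    n, k = scores.shape
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    g = max(1, int(round(group_frac * n)))   # size of each criterion group
    low, high = order[:g], order[-g:]        # lowest and highest scorers

    dif = scores.mean(axis=0) * 100          # difficulty index: % correct
    di = (scores[high].sum(axis=0) - scores[low].sum(axis=0)) / g  # discrimination

    # Distractor efficiency: share of the 3 distractors that are functional,
    # i.e. chosen by at least nfd_cutoff of examinees.
    de = np.empty(k)
    for j in range(k):
        distractors = [o for o in range(4) if o != key[j]]
        nfd = sum((chosen[:, j] == o).mean() < nfd_cutoff for o in distractors)
        de[j] = (3 - nfd) / 3 * 100

    # Kuder-Richardson 20 internal-consistency reliability
    p = scores.mean(axis=0)
    q = 1 - p
    kr20 = (k / (k - 1)) * (1 - (p * q).sum() / totals.var(ddof=1))
    return dif, di, de, kr20
```

With the thresholds used in this study, an item would then be flagged as acceptable when its DIF I falls within 30%–70% and its DI exceeds 0.2.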
Results
The mean difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE) were 73.8%, 0.26 and 73.5%, respectively. Of 250 items, 38.8% had an acceptable DIF I (30%–70%) and 66.4% had ‘good to excellent’ DI (>0.2). Regarding the 750 distractors, 33.6%, 37%, 20% and 9.2% of items contained zero, one, two and three non-functional distractors, respectively. The mean Kuder–Richardson reliability coefficient was 0.76. The DIF I was significantly associated with DE (p=0.048).
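The reliability figure above, assuming the commonly reported KR-20 variant for dichotomously scored items, corresponds to the standard formula:

$$\mathrm{KR}_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)$$

where $k$ is the number of items, $p_i$ is the proportion of examinees answering item $i$ correctly, $q_i = 1 - p_i$, and $\sigma_X^2$ is the variance of total test scores.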
Conclusion
The post-validation item analysis showed that a considerable proportion of the questions had acceptable parameters and were recommended for item banking, whereas some questions needed to be rephrased and reassessed or discarded. Three-option multiple-choice questions should be considered for future examinations to improve the assessment process.