Item analysis of general surgery multi-institutional mock oral exam: opportunities for quality improvement

Authors:

Jerome Andres, Ivy A. Huang, Areti Tillou, Justin P. Wagner, Catherine E. Lewis, Farin F. Amersi, Timothy R. Donahue, Formosa C. Chen, James X. Wu

Abstract

Purpose: Mock oral examinations (MOEs) prepare general surgery residents for the American Board of Surgery Certifying Exam by assessing their medical knowledge and clinical judgement. There is no accepted standard process for quality analysis of MOE content items. Effective questions should correlate with mastery of MOE content as well as with exam passage. Our aim was to identify opportunities for question improvement via item analysis of a standardized MOE.

Methods: Retrospective review of testing data from the 2022 Southern California Virtual MOE, which examined 64 general surgery residents from six training programs. Each resident was assessed with 73 exam questions distributed across 12 standardized cases. Study authors indexed questions by clinical topic (e.g., breast, trauma) and competency category (e.g., professionalism, operative approach). We defined MOE passage as a mean percentage correct and mean room score within one standard deviation of the mean or higher. Questions were assessed for difficulty, discrimination between PGY levels, and correlation with MOE passage.

Results: The overall passage rate was 77% (49/64 residents), with no differences between postgraduate year (PGY) levels. PGY3 residents answered fewer questions correctly than PGY4 residents (72% vs 78%, p < 0.001) and PGY5 residents (72% vs 82%, p < 0.001). Of 73 total questions, 17 (23.2%) significantly correlated with MOE passage or failure. By competency category, these were predominantly related to patient care (52.9%) and operative approach (23.5%), with fewer related to diagnosis (11.8%), professional behavior (5.9%), and decision to operate (5.9%). By clinical topic, they were equally distributed among trauma (17.7%), large intestine (17.7%), endocrine (17.7%), and surgical critical care (17.7%), with fewer in breast (11.8%), stomach (11.8%), and pediatric surgery (5.9%). We identified two types of ineffective questions: (1) questions answered correctly by 100% of test-takers, with no discriminatory ability (n = 3); and (2) questions that varied inversely with exam passage (n = 11). In total, 19% (14/73) of exam questions were deemed ineffective.

Conclusions: Item analysis of a multi-institutional mock oral exam found that 23% of questions correlated with exam passage or failure, effectively discriminating which examinees had mastered MOE content. We also identified 19% of questions as ineffective; these can be targeted for improvement.

Publisher

Springer Science and Business Media LLC
