eOSCE stations live versus remote evaluation and scores variability

Authors:

Donia Bouzid, Jimmy Mullaert, Aiham Ghazali, Valentine Marie Ferré, France Mentré, Cédric Lemogne, Philippe Ruszniewski, Albert Faye, Alexy Tran Dinh, Tristan Mirault, Nathan Peiffer-Smadja, Léonore Muller, Laure Falque Pierrotin, Michael Thy, Maksud Assadi, Sonia Yung, Christian de Tymowski, Quentin le Hingrat, Xavier Eyer, Paul Henri Wicky, Mehdi Oualha, Véronique Houdouin, Patricia Jabre, Dominique Vodovar, Marco Dioguardi Burgio, Noémie Zucman, Rosy Tsopra, Asmaa Tazi, Quentin Ressaire, Yann Nguyen, Muriel Girard, Adèle Frachon, François Depret, Anna Pellat, Adèle de Masson, Henri Azais, Nathalie de Castro, Caroline Jeantrelle, Nicolas Javaud, Alexandre Malmartel, Constance Jacquin de Margerie, Benjamin Chousterman, Ludovic Fournel, Mathilde Holleville, Stéphane Blanche

Abstract

Background
Objective structured clinical examinations (OSCEs) are recognized as a fair evaluation method, and in recent years the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and to assess the factors associated with score variability during eOSCEs.

Methods
We conducted large-scale eOSCEs at the medical school of the Université de Paris Cité in June 2021 and recorded all the students' performances, allowing a second evaluation. To assess agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as the explained variable.

Results
One hundred seventy observations were analyzed for the first station after quality control; 192 and 110 observations were retained for the statistical analysis of the two other stations. The median scores were 60 out of 100 (IQR 50–70), 60 out of 100 (IQR 54–70), and 53 out of 100 (IQR 45–62) for the three stations. The proportions of score variance explained by the rater (rater ICC) were 23.0%, 16.8%, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male, and scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively).

Conclusion
Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs.
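As an illustration of the Methods, the minimal sketch below shows one way a rater ICC of this kind could be computed: a linear mixed model with crossed random effects for student and rater, from which the share of score variance attributable to the rater is derived. This is not the authors' code; the use of statsmodels and the column names (score, student, rater) in a long-format table with one row per rating are assumptions for illustration only.

```python
# Sketch: rater ICC from a linear mixed model with crossed random effects
# for student and rater (assumed data layout, not the study's actual data).
import pandas as pd
import statsmodels.api as sm


def rater_icc(df: pd.DataFrame) -> float:
    """Return the proportion of score variance attributable to the rater."""
    df = df.copy()
    df["one"] = 1  # single dummy group so student and rater enter as crossed effects
    model = sm.MixedLM.from_formula(
        "score ~ 1",
        groups="one",
        re_formula="0",  # no random intercept for the dummy group itself
        vc_formula={"student": "0 + C(student)", "rater": "0 + C(rater)"},
        data=df,
    )
    result = model.fit(reml=True)
    # result.vcomp holds the variance component estimates (in model order);
    # result.scale is the residual variance.
    vc = dict(zip(model.exog_vc.names, result.vcomp))
    total = vc["student"] + vc["rater"] + result.scale
    return vc["rater"] / total
```

Under this parameterization, a rater ICC of roughly 0.17 to 0.33, as reported for the three stations, means that about a sixth to a third of the total score variance reflects which rater scored the performance rather than differences between students.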

Publisher

Springer Science and Business Media LLC

Subject

Education, General Medicine

Cited by 6 articles.
