Zero-Shot Multimodal Question Answering for Assessment of Medical Student OSCE Physical Exam Videos
Authors:
Holcomb Michael J., Kang Shinyoung, Shakur Ameer, Vedovato Sol, Hein David, Dalton Thomas O., Campbell Krystle K., Scott Daniel J., Danuser Gauden, Jamieson Andrew R.
Abstract
The Objective Structured Clinical Examination (OSCE) is a critical component of medical education whereby the data gathering, clinical reasoning, physical examination, diagnostic, and planning capabilities of medical students are assessed in a simulated outpatient clinical setting with standardized patient actors (SPs) playing the role of patients with a predetermined diagnosis, or case. This study is the first to explore the zero-shot automation of physical exam grading in OSCEs by applying multimodal question-answering techniques to the analysis of audiovisual recordings of simulated medical student encounters. Employing a combination of large multimodal models (LLaVA-1.6 7B, 13B, and 34B; GPT-4V; and GPT-4o), automatic speech recognition (Whisper v3), and large language models (LLMs), we assess the feasibility of applying these component systems to the domain of student evaluation without any retraining. Our approach converts video content into textual representations, encompassing the transcripts of the audio component and structured descriptions of selected video frames generated by the multimodal model. These representations, referred to as “exam stories,” are then used as context for an abstractive question-answering problem via an LLM. A collection of 191 audiovisual recordings of medical student encounters with an SP for a single OSCE case was used as a test bed for exploring relevant features of successful exams. During this case, the students should have performed three physical exams: 1) mouth exam, 2) ear exam, and 3) nose exam. These examinations were each scored by two trained, non-faculty standardized patient evaluators (SPEs) using the audiovisual recordings; an experienced, non-faculty SPE adjudicated disagreements. The percentage agreement between the described methods and the SPEs’ determination of exam occurrence varied from 26% to 83%. The audio-only methods, which relied exclusively on the transcript for exam recognition, performed uniformly better by this measure than both the image-only methods and the combined methods across differing model sizes. The outperformance of the transcript-only approach was strongly linked to the presence of key phrases with which the student-physician would “signpost” the progression of the physical exam for the standardized patient, either announcing that they were about to begin an examination or giving the patient instructions. Multimodal models offer tremendous opportunity for improving the workflow of physical examination evaluation, for example by saving time and guiding evaluator focus for better assessment. While these models offer the promise of unlocking audiovisual data for downstream analysis with natural language processing methods, our findings reveal a gap between the off-the-shelf AI capabilities of many available models and the nuanced requirements of clinical practice, highlighting a need for further development and enhanced evaluation protocols in this area. We are actively pursuing a variety of approaches to realize this vision.
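The “exam story” pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors’ implementation: it assumes the OpenAI API for both transcription (`whisper-1`, standing in for the paper’s Whisper v3) and multimodal captioning/QA (GPT-4o), and the function names, prompts, and frame-sampling interval are hypothetical choices made for clarity.

```python
# Minimal sketch of the described pipeline: transcript + frame descriptions
# -> "exam story" -> abstractive QA. Model names, prompts, and sampling
# rate are illustrative assumptions, not the authors' configuration.
import base64
import cv2  # pip install opencv-python
from openai import OpenAI

client = OpenAI()

def transcribe_audio(audio_path: str) -> str:
    """Speech-to-text for the encounter audio (the paper uses Whisper v3;
    the hosted whisper-1 endpoint stands in here)."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def describe_frames(video_path: str, every_n_seconds: int = 10) -> list[str]:
    """Generate structured descriptions of sampled video frames with a
    multimodal model (the paper evaluates LLaVA-1.6, GPT-4V, and GPT-4o)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    descriptions, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % int(fps * every_n_seconds) == 0:
            _, jpeg = cv2.imencode(".jpg", frame)
            b64 = base64.b64encode(jpeg.tobytes()).decode()
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{
                    "role": "user",
                    "content": [
                        {"type": "text",
                         "text": "Describe what the medical student is doing "
                                 "with the patient in this frame."},
                        {"type": "image_url",
                         "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                    ],
                }],
            )
            descriptions.append(resp.choices[0].message.content)
        frame_idx += 1
    cap.release()
    return descriptions

def grade_exam(transcript: str, frame_notes: list[str], exam: str) -> str:
    """Abstractive question answering over the assembled exam story,
    e.g. exam = 'mouth', 'ear', or 'nose'."""
    story = transcript + "\n\nFrame descriptions:\n" + "\n".join(frame_notes)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You grade OSCE recordings. Answer yes or no, "
                        "with a brief justification."},
            {"role": "user",
             "content": f"Based on this exam story, did the student perform "
                        f"a {exam} exam?\n\n{story}"},
        ],
    )
    return resp.choices[0].message.content
```

Omitting `describe_frames` and grading from the transcript alone corresponds to the audio-only condition, which the abstract reports performed best; passing only the frame descriptions corresponds to the image-only condition.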
Publisher
Cold Spring Harbor Laboratory