Abstract
Objective
Recent advancements in GPT-4 have enabled the combined analysis of text and visual data. Diagnosis in ophthalmology is often based on ocular examinations and imaging, alongside the clinical context. The aim of this study was to evaluate the performance of multimodal GPT-4 (GPT-4V) in an integrated analysis of ocular images and clinical text.
Methods
This retrospective study included 40 patients seen at our institution with ocular pathologies. Cases were selected by a board-certified ophthalmologist to represent a variety of pathologies and to match the level expected of ophthalmology residents. We provided the model with each image, first without and then with the clinical context. We also asked two non-ophthalmology physicians to write a diagnosis for each image, first without and then with the clinical context. Answers from both GPT-4V and the non-ophthalmologists were evaluated by two board-certified ophthalmologists. Performance accuracies were calculated and compared.
Results
GPT-4V provided the correct diagnosis in 19/40 (47.5%) cases based on images without clinical context, and in 27/40 (67.5%) cases when clinical context was provided. The non-ophthalmology physicians provided the correct diagnosis in 24/40 (60.0%) and 23/40 (57.5%) of cases without clinical context, and in 29/40 (72.5%) and 27/40 (67.5%) of cases with clinical context.
Conclusion
GPT-4V at its current stage is not yet suitable for clinical application in ophthalmology. Nonetheless, its ability to simultaneously analyze and integrate visual and textual data, and to arrive at an accurate clinical diagnosis in the majority of cases, is impressive. Multimodal large language models such as GPT-4V have significant potential to advance both patient care and research in ophthalmology.
Publisher
Cold Spring Harbor Laboratory
Cited by
6 articles.