Affiliation:
1. Department of Surgery, Peninsula Health, Melbourne, Victoria, Australia
2. Faculty of Science, Medicine, and Health, Monash University, Melbourne, Victoria, Australia
Abstract
Background
The COVID-19 pandemic has significantly disrupted the clinical experience and exposure of medical students and junior doctors. The integration of artificial intelligence (AI) into medical education has the potential to enhance learning and improve patient care. This study aimed to evaluate the effectiveness of three popular large language models (LLMs) as clinical decision-making support tools for junior doctors.

Methods
A series of increasingly complex clinical scenarios was presented to ChatGPT, Google's Bard, and Bing AI. Their responses were evaluated against standard clinical guidelines; for readability using the Flesch Reading Ease Score, the Flesch-Kincaid Grade Level, and the Coleman-Liau Index; and for reliability and suitability using the modified DISCERN score. Finally, the LLMs' outputs were rated by three experienced specialists on a Likert scale for accuracy, informativeness, and accessibility.

Results
In terms of readability and reliability, ChatGPT stood out among the three LLMs, recording the highest scores on the Flesch Reading Ease Score (31.2 ± 3.5), Flesch-Kincaid Grade Level (13.5 ± 0.7), Coleman-Liau Index (13), and DISCERN (62 ± 4.4). These results indicate statistically significantly superior comprehensibility and alignment with clinical guidelines in the medical advice given by ChatGPT. Bard followed closely behind, with Bing AI trailing in all categories. The only non-significant differences (P > 0.05) were between the readability indices of ChatGPT and Bard, and between the Flesch Reading Ease scores of ChatGPT/Bard and Bing AI.

Conclusion
This study demonstrates the potential utility of LLMs in fostering self-directed and personalized learning, and in bolstering clinical decision-making support for junior doctors. However, further development is needed before they can be integrated into medical education.
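For context, the three readability indices named in the Methods are computed from simple text statistics. The Python sketch below is not from the study; it applies the standard published formulas, and the naive vowel-group syllable counter is an assumption made here for brevity (published tools use dictionary- or rule-based counters).

import re

def _syllables(word: str) -> int:
    # Naive syllable estimate: count contiguous vowel groups.
    # Assumption for this sketch; real counters are dictionary-based.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    # Basic text statistics shared by all three indices.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_letters = sum(len(w) for w in words)
    n_syllables = sum(_syllables(w) for w in words)

    wps = n_words / sentences      # words per sentence
    spw = n_syllables / n_words    # syllables per word
    L = 100 * n_letters / n_words  # letters per 100 words
    S = 100 * sentences / n_words  # sentences per 100 words

    return {
        # Flesch Reading Ease: higher score = easier to read.
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade.
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Coleman-Liau Index: grade level from letter/sentence counts.
        "coleman_liau_index": 0.0588 * L - 0.296 * S - 15.8,
    }

if __name__ == "__main__":
    sample = ("Administer intravenous fluids and reassess the patient. "
              "If hypotension persists, escalate to the senior registrar.")
    for name, score in readability(sample).items():
        print(f"{name}: {score:.1f}")

Note on interpretation: a higher Flesch Reading Ease score indicates easier text, while the Flesch-Kincaid Grade Level and Coleman-Liau Index approximate the US school grade needed to understand it, so the scores of about 13 reported above correspond to college-level reading difficulty.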
Cited by
18 articles.