Abstract
The emergence of Large Language Models (LLMs) with remarkable performance, such as ChatGPT and GPT-4, has led to unprecedented uptake among the general population. One of their most promising and most studied applications is education, owing to their ability to understand and generate human-like text, which creates a multitude of opportunities for enhancing educational practices and outcomes. The objective of this study is two-fold: to assess the accuracy of ChatGPT/GPT-4 in answering rheumatology questions from the Spanish access exam to specialized medical training (MIR), and to evaluate the medical reasoning these LLMs follow in answering those questions. A dataset of 145 rheumatology-related questions, RheumaMIR, extracted from the exams held between 2010 and 2023, was created for this purpose, used to prompt the LLMs, and made publicly available. Six rheumatologists with clinical and teaching experience evaluated the clinical reasoning of the chatbots using a 5-point Likert scale, and their degree of agreement was analyzed. The association between variables that could influence the models' accuracy (i.e., year of the exam, disease addressed, type of question, and gender) was also studied. ChatGPT demonstrated a high level of performance in both accuracy, 66.43%, and clinical reasoning, with a median (Q1-Q3) score of 4.5 (2.33-4.67). GPT-4 performed even better, with an accuracy of 93.71% and a median clinical reasoning score of 4.67 (4.5-4.83). These findings suggest that LLMs may serve as valuable tools in rheumatology education, aiding exam preparation and supplementing traditional teaching methods.

What is already known on this topic
Large Language Models have demonstrated remarkable performance when presented with medical exam questions. However, no study has evaluated their clinical reasoning in the field of rheumatology.

What this study adds
This is the first study to evaluate the accuracy and clinical reasoning of ChatGPT and GPT-4 when rheumatology questions from an official access exam to specialized medical training are used as prompts.

How this study might affect research, practice or policy
This study highlights the usefulness of two Large Language Models, ChatGPT and GPT-4, in the training of medical students in the field of rheumatology.

Highlights
ChatGPT showed an accuracy of 66.43% in answering MIR questions, while GPT-4 exhibited a significantly higher accuracy of 93.71%.
The median (Q1-Q3) value of the average clinical reasoning score was 4.67 (4.5-4.83) for GPT-4 and 4.5 (2.33-4.67) for ChatGPT.
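For readers who want to see how the reported summary statistics are computed, the following is a minimal sketch in Python. It assumes a hypothetical CSV export of the RheumaMIR evaluation with one row per question, a binary correctness flag per model, and the six rheumatologists' 1-5 Likert ratings per model; the file name and column names are illustrative, not taken from the paper's actual data release.

```python
# Minimal sketch of the summary statistics reported in the abstract.
# Assumes a hypothetical CSV "rheumamir_scores.csv" with one row per
# question, columns <model>_correct (0/1) and <model>_rater1..rater6
# (1-5 Likert scores). Names are illustrative, not from the paper.
import pandas as pd

df = pd.read_csv("rheumamir_scores.csv")  # hypothetical file

for model in ["chatgpt", "gpt4"]:
    # Accuracy: share of the 145 questions answered correctly.
    accuracy = df[f"{model}_correct"].mean() * 100

    # Average the six evaluators' Likert scores per question, then
    # take the median and interquartile range across questions.
    rater_cols = [f"{model}_rater{i}" for i in range(1, 7)]
    per_question_mean = df[rater_cols].mean(axis=1)
    q1, med, q3 = per_question_mean.quantile([0.25, 0.5, 0.75])

    print(f"{model}: accuracy {accuracy:.2f}%, "
          f"reasoning median (Q1-Q3) {med:.2f} ({q1:.2f}-{q3:.2f})")
```

Under this layout, the script would print one line per model matching the figures quoted above (e.g., 66.43% and 4.5 (2.33-4.67) for ChatGPT).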
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.