Abstract
Prior work has shown that large language models (LLMs) can answer expert-level multiple-choice questions in medicine, but they are limited both by their tendency to hallucinate knowledge and by their inherent inadequacy at basic mathematical operations. Unsurprisingly, early evidence suggests that LLMs perform poorly when asked to execute common clinical calculations. Recently, it has been demonstrated that LLMs are capable of interacting with external programs and tools, presenting a possible remedy for this limitation. In this study, we explore the ability of ChatGPT (GPT-4, November 2023) to perform medical calculations, evaluating its performance across 48 diverse clinical calculation tasks. Our findings indicate that ChatGPT is an unreliable clinical calculator, delivering inaccurate responses in one-third of trials (n=212). To address this, we developed an open-source clinical calculation API (openmedcalc.org), which we then integrated with ChatGPT. We subsequently evaluated the performance of this augmented model against standard ChatGPT using 75 clinical vignettes spanning three common clinical calculation tasks: Caprini VTE Risk, Wells DVT Criteria, and MELD-Na. The augmented model demonstrated a marked improvement in accuracy over unaugmented ChatGPT. Our findings suggest that integration of machine-usable, clinician-informed tools can help alleviate the reliability limitations observed in medical LLMs.
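The paper's core idea is that calculations like these should be delegated to deterministic, clinician-informed code rather than performed by the model itself. As one illustration of the kind of calculator such an API might expose, below is a minimal sketch of the MELD-Na score following the widely used OPTN/UNOS formulation (the function name and interface are illustrative, not the actual openmedcalc.org API; consult the published formula before any real use):

```python
import math

def meld_na(creatinine, bilirubin, inr, sodium):
    """MELD-Na sketch per the OPTN/UNOS formulation (illustrative only)."""
    # Lab values below 1.0 are floored at 1.0; creatinine is capped at 4.0.
    cr = min(max(creatinine, 1.0), 4.0)
    bili = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    meld = round(10 * (0.957 * math.log(cr)
                       + 0.378 * math.log(bili)
                       + 1.120 * math.log(inr)
                       + 0.643))
    # The sodium adjustment applies only when MELD > 11,
    # with sodium bounded to the range 125-137 mmol/L.
    if meld > 11:
        na = min(max(sodium, 125), 137)
        meld = round(meld + 1.32 * (137 - na) - 0.033 * meld * (137 - na))
    return min(meld, 40)  # scores are capped at 40

print(meld_na(creatinine=2.0, bilirubin=3.0, inr=1.5, sodium=130))  # → 26
```

A tool-augmented LLM would extract the four lab values from the vignette and pass them to a function like this, so the arithmetic itself never depends on the model.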
Publisher
Cold Spring Harbor Laboratory
Cited by 1 article.