Authors:
Steven Xu, Amoreena Most, Aaron Chase, Tanner Hedrick, Brian Murray, Kelli Keats, Susan Smith, Erin Barreto, Tianming Liu, Andrea Sikora
Abstract
Background: Large language models (LLMs) have shown capability in diagnosing complex medical cases and passing medical licensing exams, but to date, only limited evaluations have studied how LLMs interpret, analyze, and optimize complex medication regimens. The purpose of this evaluation was to test the ability of four LLMs to identify medication errors and recommend appropriate medication interventions in complex patient cases from the intensive care unit (ICU).
Methods: A series of eight patient cases was developed by critical care pharmacists, each including history of present illness, laboratory values, vital signs, and medication regimens. Four LLMs (ChatGPT (GPT-3.5), ChatGPT (GPT-4), Claude2, and Llama2-7b) were then prompted to develop a medication regimen for each patient. LLM-generated medication regimens were reviewed by a panel of seven critical care pharmacists to assess for the presence of medication errors and clinical relevance. For each medication regimen recommended by an LLM, clinicians were asked to indicate whether they would continue each medication, identify perceived medication errors among the recommended medications, identify any life-threatening medication choices, and rate overall agreement on a 5-point Likert scale.
Results: The clinician panel elected to continue therapies recommended by the LLMs 55.8-67.9% of the time. Clinicians perceived 1.57-4.29 medication errors per recommended regimen, and life-threatening recommendations were present 15.0-55.3% of the time. The level of agreement ranged from 1.85 to 2.67 across the four LLMs.
Conclusions: LLMs demonstrated potential to serve as clinical decision support for the management of complex medication regimens with further domain-specific training; however, caution should be used when employing LLMs for medication management given their present capabilities.
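The sketch below illustrates the kind of prompting workflow the Methods describe: a composed ICU patient case is submitted to a chat LLM, and the returned regimen text is what the pharmacist panel would later score. This is not the authors' evaluation code; the client setup, model name, system prompt, and case wording are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the study's actual code): prompt a chat LLM
# with an ICU patient case and capture the recommended medication regimen.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical case template; the study's cases included history of present
# illness, laboratory values, vital signs, and the current medication regimen.
patient_case = (
    "History of present illness: ...\n"
    "Laboratory values: ...\n"
    "Vital signs: ...\n"
    "Current medication regimen: ...\n"
)

response = client.chat.completions.create(
    model="gpt-4",  # one of the four models compared; the others use their own APIs
    messages=[
        {"role": "system", "content": "You are a critical care clinician."},
        {
            "role": "user",
            "content": patient_case
            + "\nRecommend a complete medication regimen for this ICU patient.",
        },
    ],
)

# Free-text regimen that a clinician panel would then review for errors,
# life-threatening choices, and overall agreement.
print(response.choices[0].message.content)
```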
Publisher
Cold Spring Harbor Laboratory