Authors:
Angel Mirana, Patel Anuj, Alachkar Amal, Baldi Pierre
Abstract
Objective: This study aims to evaluate the capabilities and limitations of three large language models (LLMs), GPT-3, GPT-4, and Bard, in the field of pharmaceutical sciences by assessing their pharmaceutical reasoning abilities on a sample North American Pharmacist Licensure Examination (NAPLEX). We also analyze the potential impacts of LLMs on pharmaceutical education and practice.

Methods: A sample NAPLEX exam consisting of 137 multiple-choice questions was obtained from an online source. GPT-3, GPT-4, and Bard were used to answer the questions by entering them into each LLM's user interface, and the answers provided by the LLMs were compared with the answer key.

Results: GPT-4 exhibited superior performance compared to GPT-3 and Bard, answering 78.8% of the questions correctly, a score 11 percentage points higher than Bard's and 27.7 percentage points higher than GPT-3's. However, on questions that required multiple selections, the performance of each LLM decreased significantly: GPT-4, GPT-3, and Bard correctly answered only 53.6%, 13.9%, and 21.4% of these questions, respectively.

Conclusion: Among the three LLMs evaluated, GPT-4 was the only model capable of passing the NAPLEX exam. Nevertheless, given the continuous evolution of LLMs, it is reasonable to anticipate that future models will pass the exam with ease. This highlights the significant potential of LLMs to affect the pharmaceutical field, so both the positive and negative implications of integrating LLMs into pharmaceutical education and practice must be evaluated.
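The Methods section describes comparing each model's answers with the exam answer key and reporting the percentage answered correctly, overall and on multi-select questions. The snippet below is a minimal sketch of one way such scoring could be done; it is not the authors' code, and the sample question data and the all-or-nothing grading of multi-select items are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): scoring model answers against an
# answer key, with multi-select questions graded all-or-nothing.
# All question data below are hypothetical placeholders.

def score(answers, key):
    """Return the fraction of questions answered correctly.

    answers, key: dicts mapping question id -> set of selected options.
    A question counts as correct only if the selected set matches exactly.
    """
    correct = sum(1 for q, truth in key.items() if answers.get(q) == truth)
    return correct / len(key)

# Hypothetical example: three questions, one of them multi-select.
key = {1: {"A"}, 2: {"B"}, 3: {"A", "C"}}   # answer key
llm = {1: {"A"}, 2: {"B"}, 3: {"A"}}        # a model's responses

overall = score(llm, key)
multi_only = score(llm, {q: t for q, t in key.items() if len(t) > 1})
print(f"overall: {overall:.1%}, multi-select only: {multi_only:.1%}")
# overall: 66.7%, multi-select only: 0.0%
```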
Publisher
Cold Spring Harbor Laboratory
Cited by
6 articles.