Large language models to differentiate vasospastic angina using patient information

Authors:

Kiyohara Yuko, Kodera Satoshi, Sato Masaya, Ninomiya Kota, Sato Masataka, Shinohara Hiroki, Takeda Norifumi, Akazawa Hiroshi, Morita Hiroyuki, Komuro Issei

Abstract

Background: Vasospastic angina is sometimes suspected from a patient's medical history. It is essential to distinguish vasospastic angina appropriately from acute coronary syndrome, because its standard treatment is pharmacotherapy rather than catheter intervention. Large language models have recently been developed and are now widely accessible. In this study, we aimed to use large language models to distinguish between vasospastic angina and acute coronary syndrome from patient information, and to compare the accuracies of these models.

Methods: We searched for cases of vasospastic angina and acute coronary syndrome written in Japanese and published in online-accessible abstracts and journals, and randomly selected 66 cases as a test dataset. In addition, we selected another ten cases as data for few-shot learning. We used generative pre-trained transformer-3.5 and -4, and Bard, with zero- and few-shot learning, and evaluated the accuracies of the models on the test dataset.

Results: Generative pre-trained transformer-3.5 with zero-shot learning achieved an accuracy of 52%, sensitivity of 68%, and specificity of 29%; with few-shot learning, it achieved an accuracy of 52%, sensitivity of 26%, and specificity of 86%. Generative pre-trained transformer-4 with zero-shot learning achieved an accuracy of 58%, sensitivity of 29%, and specificity of 96%; with few-shot learning, it achieved an accuracy of 61%, sensitivity of 63%, and specificity of 57%. Bard with zero-shot learning achieved an accuracy of 47%, sensitivity of 16%, and specificity of 89%; with few-shot learning, it could not be assessed because it failed to produce output.

Conclusion: Generative pre-trained transformer-4 with few-shot learning was the best of all the models. The accuracies of models with zero- and few-shot learning were almost the same. In the future, models could be made more accurate by combining text data with other modalities.
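The reported metrics follow the standard binary-classification definitions: accuracy over all 66 test cases, sensitivity over the true vasospastic-angina cases, and specificity over the true acute-coronary-syndrome cases. A minimal sketch of that computation is shown below; the label strings ("VSA", "ACS") and the choice of vasospastic angina as the positive class are illustrative assumptions, as the abstract does not specify the label encoding used.

```python
def binary_metrics(y_true, y_pred, positive="VSA"):
    """Accuracy, sensitivity, and specificity for binary labels.

    Labels here are hypothetical strings: "VSA" (vasospastic angina)
    is treated as the positive class, "ACS" (acute coronary syndrome)
    as the negative class.
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)

    accuracy = (tp + tn) / len(pairs)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall on VSA cases
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # recall on ACS cases
    return accuracy, sensitivity, specificity
```

Note that with these definitions a model can trade sensitivity for specificity, which is visible in the results: generative pre-trained transformer-4 with zero-shot learning is highly specific (96%) but insensitive (29%), while its few-shot variant is more balanced.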

Publisher

Cold Spring Harbor Laboratory

Cited by 1 article.
