Advancing entity recognition in biomedicine via instruction tuning of large language models

Authors:

Vipina K. Keloth1, Yan Hu2, Qianqian Xie1, Xueqing Peng1, Yan Wang1, Andrew Zheng3, Melih Selek4, Kalpana Raja1, Chih-Hsuan Wei5, Qiao Jin5, Zhiyong Lu5, Qingyu Chen1,5, Hua Xu1

Affiliation:

1. Section of Biomedical Informatics and Data Science, School of Medicine, Yale University, New Haven, CT 06510, United States

2. McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States

3. William P. Clements High School, Sugar Land, TX 77479, United States

4. Stephen F. Austin High School, Sugar Land, TX 77498, United States

5. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, United States

Abstract

Motivation: Large language models (LLMs) have the potential to revolutionize natural language processing, excelling not only at text generation and reasoning but also at zero-/few-shot learning, adapting swiftly to new tasks with minimal fine-tuning. LLMs have also shown great promise in biomedical and healthcare applications. However, for named entity recognition (NER), particularly in the biomedical domain, LLMs fall short of the effectiveness exhibited by fine-tuned domain-specific models. One key reason is that NER is typically conceptualized as a sequence labeling task, whereas LLMs are optimized for text generation and reasoning.

Results: We developed an instruction-based learning paradigm that transforms biomedical NER from a sequence labeling task into a generation task. The paradigm is end-to-end and streamlines training and evaluation by automatically repurposing pre-existing biomedical NER datasets. Using this paradigm, we built BioNER-LLaMA with LLaMA-7B as the foundational LLM. We evaluated BioNER-LLaMA extensively on three widely recognized biomedical NER datasets covering disease, chemical, and gene entities. BioNER-LLaMA consistently achieved F1-scores 5% to 30% higher than the few-shot performance of GPT-4 on these datasets. We show that a general-domain LLM can match the performance of rigorously fine-tuned PubMedBERT models and PMC-LLaMA, a biomedical-specific language model. Our findings underscore the potential of the proposed paradigm for developing general-domain LLMs that can rival state-of-the-art performance in multi-task, multi-domain biomedical and health applications.

Availability and implementation: Datasets and other resources are available at https://github.com/BIDS-Xu-Lab/BioNER-LLaMA.
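The abstract's core idea, recasting NER from sequence labeling to generation by repurposing existing annotated datasets, can be sketched as follows. This is a minimal illustration that assumes a plain BIO tag scheme and an instruction-style (instruction, input, output) training record; the prompt wording, field names, and helper function are assumptions for illustration, not the authors' exact format.

```python
def bio_to_instruction(tokens, tags, entity_type):
    """Convert a BIO-labeled sentence into a generation-style training example.

    tokens: list of word strings
    tags:   list of BIO labels ("B", "I", "O"), one per token
    entity_type: human-readable entity class for the instruction text
    """
    entities, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                       # start of a new entity mention
            if current:
                entities.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:         # continuation of the current mention
            current.append(token)
        else:                                # outside any entity
            if current:
                entities.append(" ".join(current))
            current = []
    if current:                              # flush a mention ending the sentence
        entities.append(" ".join(current))

    return {
        "instruction": f"Extract all {entity_type} entities from the text.",
        "input": " ".join(tokens),
        "output": "; ".join(entities) if entities else "None",
    }


# Example: a disease-annotated sentence becomes one instruction-tuning record.
example = bio_to_instruction(
    ["Mutations", "in", "BRCA1", "cause", "breast", "cancer", "."],
    ["O", "O", "O", "O", "B", "I", "O"],
    "disease",
)
```

A fine-tuned model trained on such records then answers new sentences by generating the entity strings directly, and evaluation compares the generated spans against the gold annotations rather than per-token labels.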

Funder

National Institutes of Health

Intramural Research Program of the National Library of Medicine

Publisher

Oxford University Press (OUP)

References: 63 articles.
