BioBERT: a pre-trained biomedical language representation model for biomedical text mining

Authors:

Lee Jinhyuk (1), Yoon Wonjin (1), Kim Sungdong (2), Kim Donghyeon (1), Kim Sunkyu (1), So Chan Ho (3), Kang Jaewoo (1,3)

Affiliation:

1. Department of Computer Science and Engineering, Korea University, Seoul, Korea

2. Clova AI Research, Naver Corp, Seong-Nam, Korea

3. Interdisciplinary Graduate Program in Bioinformatics, Korea University, Seoul, Korea

Abstract

Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.

Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.

Availability and implementation: We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
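For readers who want to experiment with the released weights described above, the sketch below shows one common way to load BioBERT for a token-level (NER-style) task using the Hugging Face transformers library. This is a minimal illustration, not the paper's own fine-tuning code: the checkpoint identifier dmis-lab/biobert-base-cased-v1.1 and the three-label tagging head are assumptions on my part, and the record itself only points to the two GitHub repositories listed in the abstract.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed Hugging Face hub mirror of the released BioBERT weights;
# the record itself only links the GitHub repositories above.
model_name = "dmis-lab/biobert-base-cased-v1.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=3 sketches a simple BIO tagging scheme; the token-classification
# head is freshly initialized and would still need fine-tuning on an NER corpus.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=3)

text = "BRCA1 mutations are associated with an increased risk of breast cancer."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (batch, sequence_length, num_labels)

pred_label_ids = logits.argmax(dim=-1)   # per-token label ids from the untrained head
print(pred_label_ids)
```

In practice the classification head would be trained on a labeled corpus, such as the biomedical NER datasets the paper evaluates on, before its predictions are meaningful.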

Funder

National Research Foundation of Korea

NRF

Publisher

Oxford University Press (OUP)

Subject

Computational Mathematics, Computational Theory and Mathematics, Computer Science Applications, Molecular Biology, Biochemistry, Statistics and Probability


Cited by 2814 articles.
