Improving Medical Entity Recognition in Spanish by Means of Biomedical Language Models
Published: 2023-12-02
Issue: 23
Volume: 12
Page: 4872
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Villaplana Aitana 1, Martínez Raquel 2, Montalvo Soto 3
Affiliation:
1. VÓCALI Sistemas Inteligentes S.L., Parque Científico de Murcia, Carretera de Madrid km 388, Complejo de Espinardo, 30100 Murcia, Spain
2. Dept. de Lenguajes y Sistemas Informáticos, Escuela Técnica Superior de Ingeniería Informática, Universidad Nacional de Educación a Distancia, Juan del Rosal 16, 28040 Madrid, Spain
3. Dept. de Informática y Estadística, Escuela Técnica Superior de Ingeniería Informática, Universidad Rey Juan Carlos, C/Tulipán s/n, 28933 Móstoles, Spain
Abstract
Named Entity Recognition (NER) is an important task for extracting relevant information from biomedical texts. Recently, pre-trained language models have made great progress on this task, particularly for English. However, the performance of pre-trained models in the Spanish biomedical domain has not been evaluated in an experimentation framework designed specifically for the task. We present an approach for named entity recognition in Spanish medical texts that makes use of pre-trained models from the Spanish biomedical domain. We also use data augmentation techniques to improve the identification of less frequent entities in the dataset. The domain-specific models improved the recognition of named entities in the domain, outperforming all the systems evaluated in the eHealth-KD challenge 2021. Language models from the biomedical domain seem to be more effective at characterizing the specific terminology involved in this named entity recognition task, where most entities correspond to the "concept" type, covering a great number of medical concepts. Regarding data augmentation, only back translation slightly improved the results. As expected, the most frequent entity types in the dataset are identified best. Although the domain-specific language models outperformed most of the other models, the multilingual generalist model mBERT obtained competitive results.
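NER systems of the kind described are typically trained as token classifiers that emit BIO labels, which are then decoded into entity spans for evaluation. The sketch below illustrates that decoding step; the tokens, tags, and entity types ("Concept", "Action") are hypothetical examples, not taken from the paper's dataset.

```python
def bio_to_entities(tokens, tags):
    """Group (token, BIO-tag) pairs into (entity_text, entity_type) spans."""
    entities = []
    current_tokens, current_type = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag starts a new entity, closing any open one first.
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type continues the open entity.
            current_tokens.append(token)
        else:
            # "O" (or an inconsistent I- tag) closes any open entity.
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        entities.append((" ".join(current_tokens), current_type))
    return entities

tokens = ["El", "paciente", "presenta", "dolor", "abdominal", "agudo"]
tags   = ["O",  "B-Concept", "B-Action", "B-Concept", "I-Concept", "O"]
print(bio_to_entities(tokens, tags))
# [('paciente', 'Concept'), ('presenta', 'Action'), ('dolor abdominal', 'Concept')]
```

Multi-token entities such as "dolor abdominal" are recovered only when the whole span is tagged consistently, which is why span-level metrics are stricter than per-token accuracy.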
Funder
DOTT-HEALTH, ISCIII, Rey Juan Carlos University, GELP
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering