Authors:
Park Hyeryun, Son Jiye, Min Jeongwon, Choi Jinwook
Abstract
One of the applications of artificial intelligence in the biomedical field is knowledge-intensive question answering. As domain expertise is particularly crucial in this field, we propose a method for efficiently infusing biomedical knowledge into pretrained language models, ultimately targeting biomedical question answering. Transferring all the semantics of a large knowledge graph into the entire model requires too many parameters, increasing computational cost and time. We investigate an efficient approach that leverages adapters to inject Unified Medical Language System knowledge into pretrained language models, and we question the need to use all the semantics in the knowledge graph. This study focuses on strategies for partitioning the knowledge graph and either discarding or merging some partitions for more efficient pretraining. According to the results on three biomedical question-answering finetuning datasets, the adapters pretrained on semantically partitioned groups performed more efficiently in terms of evaluation metrics, required parameters, and time. The results also show that discarding groups with fewer concepts is the better direction for small datasets, whereas merging these groups is better for large datasets. Furthermore, the metric results show only a slight improvement, demonstrating that the adapter methodology is rather insensitive to the group formulation.
Funder
Ministry of Health & Welfare
Publisher
Springer Science and Business Media LLC
References (39 articles)
1. Jin, Q. et al. Biomedical question answering: A survey of approaches and challenges. ACM Comput. Surv. 55(2), 1–36 (2022).
2. Au, Y. J. et al. AI chatbots not yet ready for clinical use. Front. Digit. Health 5, 60 (2023).
3. Petroni, F. et al. KILT: A benchmark for knowledge intensive language tasks. In Proc. NAACL: Human Language Technologies 2523–2544. https://doi.org/10.18653/v1/2021.naacl-main.200 (2021).
4. Faldu, K., Sheth, A., Kikani, P. & Akbari, H. KI-BERT: Infusing knowledge context for better language and domain understanding. Preprint at https://arxiv.org/abs/2104.08145 (2021).
5. Wang, R. et al. K-adapter: Infusing knowledge into pre-trained models with adapters. In Proc. ACL-IJCNLP: Findings of the Association for Computational Linguistics 1405–1418 (2021).