Leveraging Medical Knowledge Graphs and Large Language Models for Enhanced Mental Disorder Information Extraction
Published: 2024-07-24
Volume: 16, Issue: 8, Page: 260
ISSN: 1999-5903
Journal: Future Internet
Language: en
Authors:
Park Chaelim 1, Lee Hayoung 1, Jeong Ok-ran 1
Affiliation:
1. School of Computing, Gachon University, 1342 Sujeong-gu, Seongnam-si 13120, Republic of Korea
Abstract
The accurate diagnosis and effective treatment of mental health disorders such as depression remain challenging owing to the complex underlying causes and varied symptomatology. Traditional information extraction methods struggle to adapt to evolving diagnostic criteria such as the Diagnostic and Statistical Manual of Mental Disorders fifth edition (DSM-5) and to contextualize rich patient data effectively. This study proposes a novel approach for enhancing information extraction from mental health data by integrating medical knowledge graphs and large language models (LLMs). Our method leverages the structured organization of knowledge graphs specifically designed for the rich domain of mental health, combined with the powerful predictive capabilities and zero-shot learning abilities of LLMs. This research enhances the quality of knowledge graphs through entity linking and demonstrates superiority over traditional information extraction techniques, making a significant contribution to the field of mental health. It enables a more fine-grained analysis of the data and the development of new applications. Our approach redefines the manner in which mental health data are extracted and utilized. By integrating these insights with existing healthcare applications, the groundwork is laid for the development of real-time patient monitoring systems. The performance evaluation of this knowledge graph highlights its effectiveness and reliability, indicating significant advancements in automating medical data processing and depression management.
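To make the abstract's pipeline concrete, the sketch below illustrates the general idea of prompting an LLM for zero-shot triple extraction from a clinical note and loading the result into a knowledge graph. It is a minimal illustration under stated assumptions, not the authors' implementation: the prompt wording, the HAS_SYMPTOM/DIAGNOSED_WITH relation labels, the extract_triples stand-in (which simulates the model's JSON output rather than calling an API), the lowercase-matching "entity linking", and the use of networkx are all assumptions introduced here.

```python
# Hedged sketch: zero-shot triple extraction with an LLM, loaded into a knowledge graph.
# The prompt template, the simulated model output, and the relation labels below are
# illustrative assumptions, not the pipeline described in the paper.
import json
import networkx as nx

PROMPT_TEMPLATE = (
    "Extract (subject, relation, object) triples about mental health from the text.\n"
    "Use relations such as HAS_SYMPTOM or DIAGNOSED_WITH. Return a JSON list.\n\nText: {text}"
)

def extract_triples(text: str) -> list[dict]:
    """Stand-in for an LLM call; a real system would send PROMPT_TEMPLATE.format(text=text)
    to a zero-shot-capable model and parse its JSON response."""
    simulated_response = json.dumps([
        {"subject": "patient", "relation": "HAS_SYMPTOM", "object": "insomnia"},
        {"subject": "patient", "relation": "DIAGNOSED_WITH", "object": "major depressive disorder"},
    ])
    return json.loads(simulated_response)

def add_to_graph(graph: nx.MultiDiGraph, triples: list[dict]) -> None:
    """Normalize surface forms by lowercasing (a naive placeholder for real entity linking)."""
    for t in triples:
        head, tail = t["subject"].lower(), t["object"].lower()
        graph.add_edge(head, tail, relation=t["relation"])

if __name__ == "__main__":
    kg = nx.MultiDiGraph()
    note = "The patient reports insomnia and meets DSM-5 criteria for major depressive disorder."
    add_to_graph(kg, extract_triples(note))
    for head, tail, data in kg.edges(data=True):
        print(f"{head} -[{data['relation']}]-> {tail}")
```

Running the sketch prints two edges (patient -[HAS_SYMPTOM]-> insomnia, patient -[DIAGNOSED_WITH]-> major depressive disorder); in a full system the graph would be enriched with DSM-5-aligned entities and used for downstream analysis and monitoring as described above.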