BACKGROUND
Rare diseases affect millions of people worldwide but often receive limited research attention because of their low prevalence, resulting in prolonged diagnostic delays and a lack of approved therapies. Recent advances in Large Language Models (LLMs) have shown promise in automating the extraction of medical information, offering the potential to improve rare disease diagnosis and management.
OBJECTIVE
Our objective is to develop an end-to-end system, AutoRD, that automates the extraction of rare disease information from medical text. We conducted several experiments to evaluate AutoRD's performance, and in this paper we highlight its strengths and limitations.
METHODS
AutoRD is a pipeline system comprising data preprocessing, entity extraction, relation extraction, entity calibration, and knowledge graph construction. We implement it using large language models and medical knowledge graphs built from open-source medical ontologies. We evaluate AutoRD quantitatively on entity extraction and relation extraction, and qualitatively on knowledge graph construction.
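The pipeline stages can be sketched as a sequence of composable steps. This is a minimal illustrative sketch only: the function names, entity/relation schemas, and the stub outputs standing in for LLM calls are assumptions, not the authors' implementation. The entity and relation type names follow those reported in this abstract.

```python
# Hypothetical sketch of an AutoRD-style pipeline; all names and data
# shapes here are illustrative assumptions, not the paper's actual code.

def preprocess(text: str) -> str:
    # Normalize whitespace before downstream extraction.
    return " ".join(text.split())

def extract_entities(text: str) -> list[dict]:
    # Stub for an LLM call returning typed entity mentions.
    # Types per the paper: rare_disease, disease, symptom_and_sign, anaphor.
    return [{"mention": "Marfan syndrome", "type": "rare_disease"}]

def extract_relations(text: str, entities: list[dict]) -> list[dict]:
    # Stub for an LLM call returning typed relations between mentions
    # (e.g. produces, increases_risk_of, is_a, is_acronym, is_synonym, anaphora).
    return [{"head": "Marfan syndrome", "type": "is_a",
             "tail": "connective tissue disorder"}]

def calibrate(entities: list[dict], ontology_terms: set[str]) -> list[dict]:
    # Keep only entities that align with terms from the medical ontologies.
    return [e for e in entities if e["mention"].lower() in ontology_terms]

def build_knowledge_graph(entities: list[dict], relations: list[dict]) -> dict:
    # Represent the graph as a node set and typed edge list.
    nodes = {e["mention"] for e in entities}
    edges = [(r["head"], r["type"], r["tail"]) for r in relations]
    return {"nodes": nodes, "edges": edges}

def run_pipeline(raw_text: str, ontology_terms: set[str]) -> dict:
    # Chain the five stages end to end.
    text = preprocess(raw_text)
    entities = calibrate(extract_entities(text), ontology_terms)
    relations = extract_relations(text, entities)
    return build_knowledge_graph(entities, relations)
```

In a real implementation, the extraction stubs would be replaced by prompted LLM calls and the ontology term set would be loaded from the open-source ontologies the system builds on.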
RESULTS
AutoRD achieves an average F1 score of 47.3% across entity and relation extraction, a 14.4% improvement over the base LLM. In detail, AutoRD achieves an overall entity extraction F1 score of 56.1% ('rare_disease': 83.5%, 'disease': 35.8%, 'symptom_and_sign': 46.1%, 'anaphor': 67.5%) and an overall relation extraction F1 score of 38.6% ('produces': 34.7%, 'increases_risk_of': 12.4%, 'is_a': 37.4%, 'is_acronym': 44.1%, 'is_synonym': 16.3%, 'anaphora': 57.5%). The qualitative experiment also shows that the constructed knowledge graph performs respectably.
CONCLUSIONS
AutoRD is an automated end-to-end system that extracts rare disease information from text to build knowledge graphs. It uses ontology-enhanced LLMs to provide a robust medical knowledge base. Experimental evaluations validate AutoRD's superior performance, demonstrating the potential of LLMs in healthcare.
CLINICALTRIAL