BACKGROUND
Extracting valuable information from clinical text data is critical in disease progression studies. Traditional methods are often unable to cope with the complexity and volume of such data. The emergence of Large Language Models (LLMs) has opened new avenues, but they are challenged by critical issues such as data security and feature hallucination.
OBJECTIVE
The primary objective of this study is to utilize a modular LLM approach to efficiently and accurately extract features from clinical text data, addressing the specific challenges of data security and feature hallucination, and improving upon the limitations of traditional methods.
METHODS
In this study, we introduced a modular LLM approach to extract features from patient admission records. The process was divided into distinct steps: concept extraction, aggregation, question generation, corpus extraction, and Q&A scale extraction. Our method was evaluated on a dataset comprising 25,709 pregnancy cases from the People's Hospital of Guangxi Zhuang Autonomous Region, China, utilizing two low-parameter LLMs, Qwen-14B-Chat (QWEN) and Baichuan2-13B-Chat (BAICHUAN).
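The five steps above can be sketched as a chain of small, single-purpose functions. This is a minimal illustration, not the authors' implementation: all function names are hypothetical, and `llm` stands in for any chat model callable (e.g. a wrapper around Qwen-14B-Chat or Baichuan2-13B-Chat).

```python
def extract_concepts(llm, record: str) -> list[str]:
    """Step 1, concept extraction: ask the LLM to list clinical concepts in a record."""
    reply = llm(f"List the clinical concepts mentioned in:\n{record}")
    return [c.strip() for c in reply.split(",") if c.strip()]

def aggregate(concepts_per_record: list[list[str]]) -> list[str]:
    """Step 2, aggregation: merge per-record concepts into one deduplicated feature set."""
    seen, merged = set(), []
    for concepts in concepts_per_record:
        for c in concepts:
            if c not in seen:
                seen.add(c)
                merged.append(c)
    return merged

def generate_questions(features: list[str]) -> dict[str, str]:
    """Step 3, question generation: turn each feature into a closed question."""
    return {f: f"What is the patient's {f}? Answer concisely or 'null'." for f in features}

def extract_corpus(record: str, feature: str) -> str:
    """Step 4, corpus extraction: narrow the record to sentences mentioning the feature."""
    relevant = "".join(s for s in record.split("\u3002") if feature in s)
    return relevant or record  # fall back to the full record if nothing matches

def qa_extract(llm, record: str, questions: dict[str, str]) -> dict[str, str]:
    """Step 5, Q&A extraction: answer each question against its narrowed corpus."""
    return {feat: llm(f"{q}\nContext: {extract_corpus(record, feat)}")
            for feat, q in questions.items()}
```

Decomposing the task this way means each prompt stays short and narrowly scoped, which is one way a modular design can limit hallucinated features relative to a single monolithic extraction prompt.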
RESULTS
The approach achieved high precision in feature extraction, with QWEN and BAICHUAN reaching average accuracies of 95.52% and 95.86%, respectively. Both models showed low null ratios (<0.21%), though their time consumption varied. We also evaluated the INT4-quantized version of QWEN (QWEN (INT4)) on a consumer-grade GPU, which achieved even better performance (97.28% accuracy and a 0% null ratio).
CONCLUSIONS
This study demonstrates the effectiveness of a modular LLM approach for extracting features from clinical text with high accuracy and efficiency. By breaking the extraction process into manageable components, this approach offers a promising solution for textual feature extraction from patient documentation.