BACKGROUND
Machine learning techniques for real-time ICD coding continue to face significant challenges owing to small datasets, diverse writing styles, unstructured clinical records, and the need for semi-manual preprocessing.
OBJECTIVE
In this study, we developed a fully automatic pipeline that maps long free-text clinical records to standard ICD codes, integrating a medical pre-trained BERT, a keyword filtration BERT, fine-tuning, and task-specific prompt learning with mixed templates and soft verbalizers.
METHODS
We integrated four components into our framework: a medical pre-trained BERT, a keyword filtration BERT, a fine-tuning phase, and task-specific prompt learning that used mixed templates and soft verbalizers. The framework was validated on a multi-center medical dataset for automated ICD coding of 13 common cardiovascular diseases, and its performance was compared against RoBERTa, XLNet, and several BERT-based fine-tuning pipelines. We also evaluated the framework under different prompt learning and fine-tuning settings. Finally, few-shot learning experiments were conducted to assess its feasibility and efficacy on small to mid-sized datasets.
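As a hedged illustration only, the minimal sketch below shows how a mixed-template, soft-verbalizer prompt learning setup for 13-class ICD classification could be assembled with the OpenPrompt library and a Hugging Face BERT checkpoint. The checkpoint name, template wording, toy example, and hyperparameters are illustrative assumptions, not the paper's exact configuration; the medical pre-training and keyword filtration components are omitted, and the task is simplified to single-label classification (a multi-label variant would swap in a BCE-with-logits loss).

```python
# Minimal sketch (assumptions: OpenPrompt library, generic BERT checkpoint,
# illustrative template text and hyperparameters -- not the paper's exact setup).
import torch
from openprompt.plms import load_plm
from openprompt.prompts import MixedTemplate, SoftVerbalizer
from openprompt import PromptForClassification, PromptDataLoader
from openprompt.data_utils import InputExample

NUM_CLASSES = 13  # 13 common cardiovascular disease ICD codes

# Load a pre-trained masked language model (a medical BERT would be substituted here).
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-uncased")

# Mixed template: fixed (hard) text plus trainable soft tokens and a [MASK] slot.
template = MixedTemplate(
    model=plm,
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} {"soft": "The discharge diagnosis corresponds to"} {"mask"}.',
)

# Soft verbalizer: learns a projection from the [MASK] representation to the label
# space instead of mapping each label to fixed answer words.
verbalizer = SoftVerbalizer(tokenizer, plm, num_classes=NUM_CLASSES)

model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

# Toy record standing in for a keyword-filtered clinical note (hypothetical text).
dataset = [InputExample(guid=0, text_a="Chest pain with ST elevation on ECG ...", label=0)]
loader = PromptDataLoader(
    dataset=dataset,
    template=template,
    tokenizer=tokenizer,
    tokenizer_wrapper_class=WrapperClass,
    max_seq_length=512,
    batch_size=4,
    shuffle=True,
)

# Standard fine-tuning loop over both PLM and prompt parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.CrossEntropyLoss()
model.train()
for batch in loader:
    logits = model(batch)                  # shape: [batch_size, NUM_CLASSES]
    loss = loss_fn(logits, batch["label"])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```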
RESULTS
Compared with traditional pre-training and fine-tuning pipelines, our approach achieved significantly higher performance, with a micro-F1 score of 0.838 and a macro-AUC of 0.958. Among the prompt learning setups, the combination of a mixed template and a soft verbalizer yielded the best performance. Few-shot experiments indicated that performance stabilized and peaked at 500 shots.
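For reference, these metrics follow their standard multi-label definitions: micro-F1 pools true positives, false positives, and false negatives across all 13 codes, while macro-AUC averages the per-code AUCs.

\[
\text{micro-F1} = \frac{2\,P_{\mu} R_{\mu}}{P_{\mu} + R_{\mu}}, \qquad
P_{\mu} = \frac{\sum_{c=1}^{13} TP_c}{\sum_{c=1}^{13} (TP_c + FP_c)}, \qquad
R_{\mu} = \frac{\sum_{c=1}^{13} TP_c}{\sum_{c=1}^{13} (TP_c + FN_c)},
\]
\[
\text{macro-AUC} = \frac{1}{13} \sum_{c=1}^{13} \mathrm{AUC}_c .
\]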
CONCLUSIONS
These findings underscore the effectiveness and superior performance of combining prompt learning with fine-tuning of pre-trained language models for downstream subtasks in medical practice. Our real-time ICD coding pipeline effectively distills detailed medical free text into standardized labels, with potential applications in clinical decision-making.