BACKGROUND
The International Classification of Diseases (ICD), developed by the World Health Organization (WHO), standardizes the coding of health conditions to support healthcare policy, research, and billing. Although AI-based automation is promising, it still falls short of human accuracy and lacks the explainability needed for adoption in medical settings.
OBJECTIVE
This study explores the potential of large language models (LLMs) to assist medical coders with International Classification of Diseases, Tenth Revision (ICD-10) coding. It aims to augment human coding by first identifying lead terms and then applying retrieval-augmented generation (RAG) for computer-assisted coding.
METHODS
The explainability dataset from the CodiEsp challenge (CodiEsp-X), comprising 1000 Spanish clinical cases annotated with ICD-10 codes, was used. From CodiEsp-X, a derived dataset (CodiEsp-X-lead) was created with GPT-4 by replacing the full textual evidence annotations with lead term annotations. Phase 1 consisted of fine-tuning a RoBERTa transformer model for named entity recognition (NER) of lead terms. In Phase 2, identified lead terms were assigned ICD codes using GPT-4 and ICD code descriptions in a RAG approach; both phases are sketched below.
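As a concrete illustration of Phase 1, the sketch below frames lead term extraction as token classification with BIO tags. The checkpoint name, label set, and toy example are assumptions for illustration, not the paper's actual training configuration.

```python
# Sketch of the Phase 1 NER fine-tuning step (assumed checkpoint and labels).
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

CHECKPOINT = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"  # assumed Spanish clinical RoBERTa
LABELS = ["O", "B-LEAD", "I-LEAD"]  # BIO tags marking ICD lead term spans

# add_prefix_space is required to pass pre-tokenized words to a RoBERTa tokenizer.
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(CHECKPOINT, num_labels=len(LABELS))

# One toy training example: a Spanish clinical sentence with word-level BIO tags.
words = ["Paciente", "con", "neumonía", "bilateral"]
word_tags = [0, 0, 1, 0]  # only "neumonía" is tagged as a lead term (B-LEAD)

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens; special tokens get -100 so the loss ignores them.
labels = [-100 if wid is None else word_tags[wid] for wid in encoding.word_ids(batch_index=0)]
encoding["labels"] = torch.tensor([labels])

model.train()
loss = model(**encoding).loss  # cross-entropy over subword tag predictions
loss.backward()                # one gradient step of the full fine-tuning loop
```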
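The following sketch illustrates the Phase 2 RAG lookup: candidate ICD-10 code descriptions are retrieved for a lead term, and GPT-4 is prompted to choose among them. The in-memory code table, the TF-IDF retriever, and the prompt wording are assumptions; in practice, the index would hold the full code description database, including the GPT-4-generated descriptions evaluated in the Results.

```python
# Sketch of the Phase 2 code assignment (assumed retrieval and prompt details).
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical excerpt of an ICD-10 code description database.
CODE_DESCRIPTIONS = {
    "I21.9": "Acute myocardial infarction, unspecified",
    "J18.9": "Pneumonia, unspecified organism",
    "N39.0": "Urinary tract infection, site not specified",
}

def retrieve_candidates(lead_term: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank code descriptions against the lead term and return the top k."""
    codes, descriptions = zip(*CODE_DESCRIPTIONS.items())
    vectorizer = TfidfVectorizer().fit(descriptions + (lead_term,))
    similarities = cosine_similarity(
        vectorizer.transform([lead_term]), vectorizer.transform(descriptions)
    )[0]
    ranked = sorted(zip(codes, descriptions, similarities), key=lambda t: -t[2])
    return [(code, desc) for code, desc, _ in ranked[:k]]

def assign_code(lead_term: str) -> str:
    """Prompt GPT-4 to pick one ICD-10 code from the retrieved candidates."""
    candidates = retrieve_candidates(lead_term)
    prompt = (
        f"Lead term: {lead_term}\n"
        "Candidate ICD-10 codes:\n"
        + "\n".join(f"{code}: {desc}" for code, desc in candidates)
        + "\nAnswer with the single best code."
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content.strip()

print(assign_code("pneumonia"))  # expected to resolve to J18.9
```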
RESULTS
In Phase 1, the fine-tuned RoBERTa model achieved an overall F1 score of 0.80 for ICD lead term extraction on the new CodiEsp-X-lead dataset. In Phase 2, GPT-4-generated code descriptions improved recall for procedure code lookups in the code description database from 55.1% to 82.3%. However, relying solely on GPT-4 prompting and code descriptions to assign the correct ICD-10 code to identified lead terms performed poorly on the CodiEsp-X task, with an F1 score of 0.305.
CONCLUSIONS
Although fine-tuning on the task's training data might have improved ICD-10 coding performance in Phase 2, it was intentionally omitted to prioritize generalizability.