BACKGROUND
Named entity recognition (NER) is critical for extracting medical entities from healthcare texts, enabling key applications in clinical decision support and data mining. However, developing NER models for low-resource languages like Estonian is challenging due to limited annotated data and pre-trained models. Large language models (LLMs) have shown promise in understanding text across languages and domains.
OBJECTIVE
This paper aims to address the challenge of developing high-quality medical NER models for low-resource languages like Estonian. The objective is to overcome this limitation by leveraging synthetic Estonian healthcare data annotated with LLMs. The focus is on training an effective NER model on synthetic data and applying it to real-world, highly sensitive medical data.
METHODS
To tackle the scarcity of annotated data in Estonian healthcare texts, we employ a novel three-step approach. First, synthetic Estonian healthcare data is generated using a locally trained model. Second, the data is annotated using LLMs. Finally, the annotated synthetic data is used to fine-tune an NER model. This paper compares the performance of different prompts, assesses the impact of using GPT-3.5-Turbo, GPT-4, or a local LLM as the annotator, and explores the relationship between the amount of annotated synthetic data and model performance.
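The LLM annotation step can be sketched as follows. This is a minimal illustrative example, not the paper's actual prompt or tag set: it assumes the LLM is asked to wrap entities in inline XML-style tags (hypothetical `<DRUG>`/`<PROC>` labels), whose output is then parsed into entity spans suitable for NER fine-tuning.

```python
import re

# Hypothetical prompt template; the paper's actual prompts differ.
def build_annotation_prompt(text: str) -> str:
    """Ask an LLM to tag drugs and procedures inline in an Estonian text."""
    return (
        "Annotate the following Estonian medical text. Wrap drug names in "
        "<DRUG>...</DRUG> and medical procedures in <PROC>...</PROC>. "
        "Return the text otherwise unchanged.\n\n" + text
    )

# Matches <DRUG>...</DRUG> or <PROC>...</PROC>; \1 backreference ensures
# the closing tag matches the opening one.
TAG_RE = re.compile(r"<(DRUG|PROC)>(.*?)</\1>")

def parse_tagged(tagged: str) -> list[tuple[str, str]]:
    """Extract (label, entity text) pairs from the LLM's tagged output."""
    return [(m.group(1), m.group(2)) for m in TAG_RE.finditer(tagged)]
```

For example, an LLM response such as `"Patsiendile määrati <DRUG>ibuprofeen</DRUG> pärast <PROC>apendektoomiat</PROC>."` parses to `[("DRUG", "ibuprofeen"), ("PROC", "apendektoomiat")]`; such pairs can then be projected back onto token positions to produce training labels for the NER model.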
RESULTS
Our approach yields promising results in the extraction of named entities from real-world medical texts. Specifically, our best setup achieves an F1 score of 0.757 for extracting drugs and an F1 score of 0.395 for extracting procedures.
CONCLUSIONS
In this paper, we show that LLMs can be leveraged to train NER models on synthetic texts, without risking the privacy of sensitive medical data. These results are achieved without relying on real human-annotated data, highlighting the effectiveness of our methodology.