Affiliation:
1. Medical Big Data Research Center, Chinese PLA General Hospital, China
2. School of Computer Science & Technology, Beijing Institute of Technology, China
Abstract
Ever-growing electronic medical corpora provide unprecedented opportunities for researchers to analyze patient conditions and drug effects. At the same time, processing large-scale electronic medical records raises severe challenges. First, emerging medical terms, including informal descriptions, are difficult to recognize. Moreover, although deep models can help with entity extraction on medical texts, they require large-scale labels that are time-intensive to obtain and not always available in the medical domain. When massive unseen concepts appear or labeled data is insufficient, the performance of existing algorithms suffers an intolerable decline. In this paper, we propose a balanced and deep active learning framework (MedNER) for Named Entity Recognition in the medical corpus to alleviate the above problems. Specifically, to describe our selection strategy precisely, we first define the uncertainty of a medical sentence as the labeling loss predicted by a loss-prediction module, and define diversity as the least text distance between pairs of sentences in a sample batch, computed from word-morpheme embeddings. Furthermore, to make a trade-off between uncertainty and diversity, we formulate a Distinct-K optimization problem that maximizes the minimum uncertainty and diversity of the chosen sentences. Finally, we propose a threshold-based approximation selection algorithm, Distinct-K Filter, which selects the most beneficial training samples by balancing diversity and uncertainty. Extensive experimental results on real datasets demonstrate that MedNER significantly outperforms existing approaches.
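For intuition, below is a minimal sketch of one plausible threshold-based batch selection step in the spirit of Distinct-K Filter. The abstract does not spell out the algorithm, so everything here is an illustrative assumption: the function select_batch, the thresholds tau_u and tau_d, and the Euclidean distance are hypothetical, with uncertainty standing in for the loss predicted by the loss-prediction module and embeddings standing in for word-morpheme sentence representations.

    # Hypothetical sketch of threshold-based selection balancing uncertainty and
    # diversity; the paper's actual Distinct-K Filter procedure is not given in
    # the abstract, so all names, thresholds, and the metric are illustrative.
    import numpy as np

    def select_batch(uncertainty, embeddings, k, tau_u, tau_d):
        """Greedily pick up to k sentences whose predicted labeling loss
        (uncertainty) exceeds tau_u and whose minimum distance to already
        selected sentences exceeds tau_d."""
        order = np.argsort(-uncertainty)      # most uncertain sentences first
        selected = []
        for i in order:
            if uncertainty[i] < tau_u:
                break                         # remaining sentences are too certain
            if selected:
                dists = np.linalg.norm(embeddings[selected] - embeddings[i], axis=1)
                if dists.min() < tau_d:       # too similar to an already chosen sentence
                    continue
            selected.append(i)
            if len(selected) == k:
                break
        return selected

    # Example usage with random stand-in scores and sentence embeddings.
    rng = np.random.default_rng(0)
    u = rng.random(100)                       # stand-in predicted labeling losses
    e = rng.normal(size=(100, 64))            # stand-in sentence embeddings
    batch = select_batch(u, e, k=8, tau_u=0.3, tau_d=6.0)

The sketch only captures the stated trade-off, i.e. that a sentence is worth labeling when it is both uncertain and far from every sentence already in the batch; the paper's optimization guarantees and threshold choices are not reproduced here.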
Publisher
Association for Computing Machinery (ACM)