Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language

Authors:

Arjun Das (1), Debasis Ganguly (2), Utpal Garain (3)

Affiliation:

1. University of Calcutta, Kolkata, India

2. Dublin City University, Dublin, Ireland

3. Indian Statistical Institute, Kolkata, India

Abstract

In this article, we propose a word embedding-based named entity recognition (NER) approach. NER is commonly approached as a sequence labeling task with the application of methods such as conditional random fields (CRFs). However, for low-resource languages without sufficiently large training data, methods such as CRFs do not perform well. In our work, we make use of the proximity of the vector embeddings of words to approach the NER problem. The hypothesis is that word vectors belonging to the same name category, such as a person's name, occur in close vicinity in the abstract vector space of the embedded words. Assuming that this clustering hypothesis holds, we apply a standard classification approach to the vectors of words to learn a decision boundary between the NER classes. Our NER experiments are conducted on a morphologically rich and low-resource language, namely Bengali. Our approach significantly outperforms standard baseline CRF approaches that use cluster labels of word embeddings and gazetteers constructed from Wikipedia. Further, we propose an unsupervised approach that uses an automatically created named entity (NE) gazetteer from Wikipedia in the absence of training data. For a low-resource language, the word vectors obtained from Wikipedia alone are not sufficient to train a classifier. We therefore propose to use the distance between the vector embeddings of words to expand the set of Wikipedia training examples with additional NEs extracted from a monolingual corpus, which yields a significant improvement in unsupervised NER performance. In fact, our expansion method performs better than the traditional CRF-based (supervised) approach (F-score of 65.4% vs. 64.2%). Finally, we compare our proposed approach against the official submissions to the IJCNLP-2008 Bengali NER shared task and achieve an overall F-score improvement of 11.26% over the best official system.
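The following Python sketch (not the authors' code) illustrates the two ideas the abstract combines: training a standard classifier directly on word vectors to separate NE classes, and expanding a seed Wikipedia gazetteer with corpus words whose embeddings lie close (by cosine similarity) to known NEs. The embedding file name, the seed gazetteer entries, the choice of logistic regression, and the similarity threshold are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of (1) classifying word vectors into NE classes and
    # (2) expanding a seed Wikipedia gazetteer by proximity in embedding space.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import cosine_similarity

    def load_vectors(path):
        """Load word vectors from a whitespace-separated text file: word v1 v2 ..."""
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split()
                if len(parts) > 2:
                    vectors[parts[0]] = np.asarray(parts[1:], dtype=float)
        return vectors

    # Hypothetical inputs: pre-trained Bengali embeddings and a small seed
    # gazetteer of NEs mined from Wikipedia category pages.
    vectors = load_vectors("bn_wiki_vectors.txt")
    seed_gazetteer = {"কলকাতা": "LOC", "রবীন্দ্রনাথ": "PER"}

    # (1) Train a classifier on the vectors of the gazetteer words; the decision
    # boundary separates NE classes under the clustering hypothesis.
    seeds = [w for w in seed_gazetteer if w in vectors]
    X = np.stack([vectors[w] for w in seeds])
    y = [seed_gazetteer[w] for w in seeds]
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # (2) Expand the gazetteer: a candidate word (here, any vocabulary word not
    # already in the seed list, standing in for words from a monolingual corpus)
    # inherits the label of its nearest seed NE if the cosine similarity is high.
    THRESHOLD = 0.7  # illustrative value
    candidates = [w for w in vectors if w not in seed_gazetteer]
    sims = cosine_similarity(np.stack([vectors[w] for w in candidates]), X)
    expanded = dict(seed_gazetteer)
    for i, w in enumerate(candidates):
        j = sims[i].argmax()
        if sims[i, j] >= THRESHOLD:
            expanded[w] = y[j]

    # At tagging time, either the expanded gazetteer or the classifier itself
    # can label an unseen word w, e.g. clf.predict([vectors[w]]).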

Funder

ADAPT Centre at DCU

Science Foundation Ireland

Indian Statistical Institute, Kolkata, India

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science


Cited by 31 articles.
