Affiliation:
1. University of Pennsylvania, Department of Computer and Information Science. oagarwal@seas.upenn.edu
2. Google Research. yinfeiy@google.com
3. Northeastern University, Khoury College of Computer Sciences. b.wallace@northeastern.edu
4. University of Pennsylvania, Department of Computer and Information Science. nenkova@seas.upenn.edu
Abstract
Named entity recognition systems achieve remarkable performance on domains such as English news. It is natural to ask: What are these models actually learning to achieve this? Are they merely memorizing the names themselves? Or are they capable of interpreting the text and inferring the correct entity type from the linguistic context? We examine these questions by contrasting the performance of several variants of named entity recognition architectures, some of which are provided only representations of the context as features. We experiment with GloVe-based BiLSTM-CRF models as well as BERT. We find that context does influence predictions, but the main factor driving high performance is learning the named tokens themselves. Furthermore, BERT is not always better than a BiLSTM-CRF model at recognizing predictive contexts. We enlist human annotators to evaluate the feasibility of inferring entity types from context alone and find that humans, too, are unable to infer the entity type for the majority of examples on which the context-only system erred. There is room for improvement, however: a system should be able to correctly recognize any named entity that appears in a predictive context, and our experiments indicate that current systems could be improved by acquiring this capability. Our human study also reveals that systems and humans do not always learn the same contextual clues; context-only systems are sometimes correct even when humans fail to recognize the entity type from the context. Finally, we find that one issue contributing to model errors is the use of “entangled” representations that encode both contextual and local token information in a single vector, which can obscure contextual clues. Our results suggest that designing models that explicitly operate over separate representations of the local input and its context may in some cases improve performance. In light of these and related findings, we highlight directions for future work.
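The following is a minimal sketch (not the authors' code) of the context-only ablation the abstract describes: every token inside a gold entity span is replaced by a mask symbol, so a system must predict the entity type from the surrounding context alone. The CoNLL-style BIO tags, the [MASK] symbol, and the function name are illustrative assumptions.

```python
# Minimal sketch of building "context-only" NER inputs: tokens inside
# a named-entity span are hidden behind a mask symbol so that only the
# surrounding linguistic context remains visible to a model.
# Assumes CoNLL-style BIO tags; all names here are hypothetical.

MASK = "[MASK]"

def mask_entity_tokens(tokens, bio_tags):
    """Replace tokens tagged B-* or I-* with MASK, keeping O tokens."""
    return [MASK if tag != "O" else tok
            for tok, tag in zip(tokens, bio_tags)]

if __name__ == "__main__":
    tokens = ["Spokesman", "John", "Smith", "said", "the", "talks",
              "resumed", "in", "Geneva", "."]
    tags = ["O", "B-PER", "I-PER", "O", "O", "O",
            "O", "O", "B-LOC", "O"]
    print(" ".join(mask_entity_tokens(tokens, tags)))
    # -> Spokesman [MASK] [MASK] said the talks resumed in [MASK] .
    # A context-only system must recover PER / LOC from what remains.
```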
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Language and Linguistics
Cited by 12 articles.