Affiliation:
1. MIT Computer Science and Artificial Intelligence Laboratory, United States.
2. Harvard School of Engineering and Applied Sciences, Cambridge, MA, USA. belinkov@mit.edu
3. MIT Computer Science and Artificial Intelligence Laboratory, United States. glass@mit.edu
Abstract
The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication
References: 193 articles.
Cited by: 131 articles.