Author:
Hu Shengding, Liu Zhiyuan, Lin Yankai, Sun Maosong
Abstract
Words are the building blocks of phrases, sentences, and documents. Word representation is thus critical for natural language processing (NLP). In this chapter, we introduce approaches to word representation learning, tracing the paradigm shift from symbolic representation to distributed representation. We also describe valuable efforts to make word representations more informative and interpretable. Finally, we present applications of word representation learning in NLP and in interdisciplinary fields, including psychology, the social sciences, history, and linguistics.
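The paradigm shift mentioned in the abstract can be made concrete with a small example. The following Python sketch is our illustration, not code from the chapter; the three-word vocabulary and the embedding values are invented for demonstration. It contrasts symbolic one-hot vectors, which make every pair of distinct words equally dissimilar, with distributed dense embeddings, which capture graded similarity.

    import numpy as np

    vocab = ["king", "queen", "apple"]
    index = {w: i for i, w in enumerate(vocab)}

    def one_hot(word: str) -> np.ndarray:
        """Symbolic representation: a sparse vector with a single 1."""
        v = np.zeros(len(vocab))
        v[index[word]] = 1.0
        return v

    # Distributed representation: low-dimensional dense vectors. The values
    # here are made up; in practice they are learned from corpora, e.g., by
    # word2vec or GloVe.
    embedding = {
        "king":  np.array([0.8, 0.6, 0.1]),
        "queen": np.array([0.7, 0.7, 0.1]),
        "apple": np.array([0.1, 0.0, 0.9]),
    }

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        """Cosine similarity between two vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # One-hot vectors of distinct words are always orthogonal.
    print(cosine(one_hot("king"), one_hot("queen")))      # 0.0
    # Dense embeddings expose graded similarity between related words.
    print(cosine(embedding["king"], embedding["queen"]))  # ~0.99
    print(cosine(embedding["king"], embedding["apple"]))  # ~0.19

Under the one-hot scheme, "king" is as far from "queen" as from "apple"; under the distributed scheme, semantically related words occupy nearby regions of the vector space, which is the property that makes distributed representations useful for downstream NLP tasks.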
Publisher:
Springer Nature Singapore