Affiliation:
1. School of Mathematics and Statistics, The University of Sydney, Sydney, NSW 2006, Australia
Abstract
Quantifying the dissimilarity of two texts is an important aspect of a number of natural language processing tasks, including semantic information retrieval, topic classification, and document clustering. In this paper, we compared the properties and performance of different dissimilarity measures D using three different representations of texts—vocabularies, word frequency distributions, and vector embeddings—and three simple tasks—clustering texts by author, subject, and time period. Using the Project Gutenberg database, we found that the generalised Jensen–Shannon divergence applied to word frequencies performed strongly across all tasks, that measures D based on vector embedding representations led to stronger performance for smaller texts, and that the optimal choice of approach was ultimately task-dependent. We also investigated, both analytically and numerically, the behaviour of the different measures D when the two texts varied in length by a factor h. We demonstrated that the (natural) estimator of the Jaccard distance between vocabularies was inconsistent and computed explicitly the h-dependency of the bias of the estimator of the generalised Jensen–Shannon divergence applied to word frequencies. We also found numerically that the Jensen–Shannon divergence and embedding-based approaches were robust to changes in h, while the Jaccard distance was not.
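As a minimal sketch of two of the measures compared, the following computes the Jaccard distance between vocabularies and the Jensen–Shannon divergence between word frequency distributions. It assumes whitespace tokenisation and shows only the standard case of the generalised Jensen–Shannon divergence (the generalised form depends on an additional order parameter not shown here); it is an illustration, not the paper's implementation.

```python
import math
from collections import Counter

def jaccard_distance(text_a: str, text_b: str) -> float:
    """Jaccard distance between the vocabularies (sets of word types) of two texts."""
    va, vb = set(text_a.split()), set(text_b.split())
    return 1.0 - len(va & vb) / len(va | vb)

def jensen_shannon_divergence(text_a: str, text_b: str) -> float:
    """Jensen-Shannon divergence (base-2 logs, so bounded in [0, 1])
    between the empirical word frequency distributions of two texts."""
    ca, cb = Counter(text_a.split()), Counter(text_b.split())
    na, nb = sum(ca.values()), sum(cb.values())
    pa = {w: c / na for w, c in ca.items()}
    pb = {w: c / nb for w, c in cb.items()}
    # Mixture distribution M = (P + Q) / 2 over the joint vocabulary.
    m = {w: 0.5 * (pa.get(w, 0.0) + pb.get(w, 0.0)) for w in set(pa) | set(pb)}

    def kl(p, q):
        # Kullback-Leibler divergence D(P || Q); terms with p(w) = 0 contribute 0.
        return sum(pw * math.log2(pw / q[w]) for w, pw in p.items() if pw > 0)

    return 0.5 * kl(pa, m) + 0.5 * kl(pb, m)
```

Because the Jaccard distance depends only on which word types occur, not how often, it is far more sensitive than the divergence-based measures to one text being much longer than the other, which is consistent with the length-dependence results summarised above.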