Abstract
Policymakers and funding agencies tend to support scientific work across disciplines and therefore rely on indicators of interdisciplinarity. Recently, text-based quantitative methods have been proposed for computing interdisciplinarity that promise several advantages over the bibliometric approach. In this paper, we provide a systematic analysis of the computation of the text-based Rao index based on probabilistic topic models, comparing a classical LDA model with a neural topic model. We systematically analyse the model parameters that affect the diversity scores and make the interaction between their different components explicit. We present an empirical study on a real data set, in which we quantify the diversity of the research within several departments of the Fraunhofer and Max Planck Societies by means of scientific abstracts published in Scopus between 2008 and 2018. Our experiments show that parameter variations, i.e. the choice of the number of topics, the hyper-parameters, and the size and balance of the data used for training the model, have a strong effect on topic-model-based Rao metrics. In particular, we observe that the quality of the topic models affects the downstream task of computing the Rao index. Topic models that yield semantically cohesive topics are less affected by fluctuations when varying the number of topics and result in more stable measurements of the Rao index.
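The text-based Rao index described above combines the topic proportions of a document collection with pairwise distances between topics. A minimal sketch of this computation, assuming topic proportions `p` and a topic-word matrix as produced by an LDA-style model (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rao_index(p, topic_word):
    """Rao(-Stirling) diversity: sum over topic pairs (i, j) of
    p[i] * p[j] * d[i, j], where d is a distance between topics.

    p          : (K,) topic proportions of a corpus or department
    topic_word : (K, V) topic-word probability matrix from a topic model
    """
    # Cosine distance between topic-word distributions as the
    # topic dissimilarity d[i, j] (one common choice, not the only one).
    norms = np.linalg.norm(topic_word, axis=1, keepdims=True)
    unit = topic_word / norms
    d = 1.0 - unit @ unit.T
    return float(p @ d @ p)

# Two maximally dissimilar topics used in equal proportion.
p = np.array([0.5, 0.5])
topic_word = np.array([[1.0, 0.0],
                       [0.0, 1.0]])
print(rao_index(p, topic_word))  # 0.5
```

A collection concentrated on a single topic (or on identical topics) yields a Rao index of 0, while spreading probability mass over dissimilar topics increases it; this is what makes the measure sensitive to the number of topics and the other model parameters studied in the paper.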
Funder
Fraunhofer-Institut für System- und Innovationsforschung ISI
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Computer Science Applications, General Social Sciences