Affiliations:
1. Amsterdam School of Communication Research (ASCoR), University of Amsterdam, PB 15793, Amsterdam, the Netherlands
2. John Glenn College of Public Affairs, Ohio State University, Columbus, USA
3. School of Information Management, Wuhan University, Wuhan, China
Abstract
Purpose
Building on Leydesdorff, Bornmann, and Mingers (2019), we elaborate on the differences between Tsinghua University and Zhejiang University as an empirical example. We address the question of whether differences in the rankings of Chinese universities are statistically significant, and we propose methods for testing the statistical significance of differences among universities within or between countries.
Design/methodology/approach
Using z-tests and overlapping confidence intervals on data for the 205 Chinese universities included in the Leiden Rankings 2020, we argue that three main groups of Chinese research universities can be distinguished: low, middle, and high.
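As an illustration of the kind of test involved (a minimal sketch, not the authors' exact implementation), the difference between two universities' shares of top-10% most-cited publications can be assessed with a two-proportion z-test; the counts below are hypothetical placeholders, not Leiden Rankings values.

```python
from math import sqrt

def z_test_two_proportions(top_a, total_a, top_b, total_b):
    """Two-proportion z-test for the difference between two universities'
    shares of top-10% most-cited publications."""
    p_a = top_a / total_a
    p_b = top_b / total_b
    # Pooled proportion under the null hypothesis that both shares are equal.
    p_pool = (top_a + top_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical publication counts; |z| > 1.96 indicates a significant
# difference at the 5% level (two-sided).
z = z_test_two_proportions(top_a=1200, total_a=10000, top_b=950, total_b=9000)
print(f"z = {z:.2f}")
```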
Findings
When the sample of 205 Chinese universities is merged with the 197 US universities included in the Leiden Rankings 2020, the results similarly indicate three main groups: low, middle, and high. In these data (Leiden Rankings and Web of Science), the z-scores of the Chinese universities are significantly below those of the US universities, albeit with some overlap.
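The grouping into low, middle, and high can be illustrated with overlapping confidence intervals; the sketch below assumes Wald-type 95% intervals on the share of top-10% publications and uses made-up counts for three universities.

```python
from math import sqrt

def wald_ci(top, total, z_crit=1.96):
    """Approximate 95% (Wald) confidence interval for the share of
    top-10% most-cited publications."""
    p = top / total
    half = z_crit * sqrt(p * (1 - p) / total)
    return p - half, p + half

def overlap(ci_a, ci_b):
    """Universities whose intervals overlap are not separated statistically."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical counts; non-overlapping intervals would place the
# universities in different (low/middle/high) groups.
counts = {"Univ A": (400, 5000), "Univ B": (520, 5000), "Univ C": (760, 5000)}
cis = {name: wald_ci(top, total) for name, (top, total) in counts.items()}
names = list(cis)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        verdict = "overlap" if overlap(cis[a], cis[b]) else "distinct"
        print(f"{a} vs {b}: {verdict}")
```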
Research limitations
We show empirically that differences in rankings may be due to changes in the data, in the models, or in the effects of modeling on the data. The scientometric groupings are not always stable when different methods are used.
Practical implications
Differences among universities can be tested for statistical significance. The statistics put the decimal-level precision of the rankings into perspective. One can operate with a low/middle/high scheme in policy debates and leave the more fine-grained rankings of individual universities to operational management and local settings.
Originality/value
In the discussion about university rankings, the question of whether differences are statistically significant has, in our opinion, been insufficiently addressed in research evaluations.