Abstract
Percentiles are statistics pointing to the standing of a paper’s citation impact relative to other papers in a given citation distribution. Percentile Ranks (PRs) often play an important role in evaluating the impact of researchers, institutions, and similar units of assessment. Because PRs are so important for the assessment of scholarly impact, and because citations differ greatly across time and fields, various percentile approaches have been proposed to time- and field-normalize citations. Unfortunately, current popular methods often face significant problems in time- and field-normalization, including when papers are assigned to multiple fields or have been published by more than one unit (e.g., researchers or countries). They also face problems in estimating citation counts for pre-defined PRs (e.g., the 90th PR). We offer a series of guidelines and procedures that, we argue, address these and other problems and make the use of percentile methods more accurate and informative. In particular, we introduce two approaches, CP-IN and CP-EX, that should be preferred in bibliometric studies because they consider the complete citation distribution and can be accurately interpreted. Both approaches are based on cumulative frequencies in percentages (CPs). The paper further shows how bar graphs and beamplots can present PRs in a more meaningful and accurate manner.
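The abstract states that CP-IN and CP-EX are based on cumulative frequencies in percentages. A minimal sketch of such indicators, assuming (since the abstract does not define them) that CP-IN counts papers cited at most as often as the focal paper and CP-EX counts papers cited strictly less often:

```python
# Percentile indicators based on cumulative frequencies in percentages (CPs).
# Assumed definitions (not given in the abstract): CP-IN is inclusive of
# ties on the focal citation count, CP-EX is exclusive of them.

def cp_in(citations, focal):
    """Cumulative % of papers in the distribution with <= focal citations."""
    return 100.0 * sum(c <= focal for c in citations) / len(citations)

def cp_ex(citations, focal):
    """Cumulative % of papers in the distribution with < focal citations."""
    return 100.0 * sum(c < focal for c in citations) / len(citations)

# Illustrative field distribution of citation counts (hypothetical data)
dist = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
print(cp_in(dist, 5))  # 60.0 (6 of 10 papers have at most 5 citations)
print(cp_ex(dist, 5))  # 50.0 (5 of 10 papers have fewer than 5 citations)
```

Because both values are computed from the full citation distribution, every paper receives a rank; the gap between CP-IN and CP-EX reflects ties on the focal citation count.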
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Computer Science Applications, General Social Sciences
Cited by 28 articles.