Abstract
While normalized bibliometric indicators are expected to resolve subject-field differences between organizations in research evaluations, identifying reference organizations that work on similar research topics remains important. Research organizations, policymakers and research funders tend to use benchmark units as points of comparison for a given research unit in order to understand and monitor its development and performance. Benchmark organizations can also be used to pinpoint potential collaboration partners or competitors. Methods for identifying benchmark research units are therefore of practical significance, yet few studies have explored this problem further. This study proposes a bibliometric approach for identifying benchmark units. We define an appropriate benchmark as a well-connected research environment in which researchers investigate similar topics and publish a similar number of publications as a given research organization during the same period. Four essential attributes for the evaluation of benchmarks are research topics, output, connectedness, and scientific impact. We apply this strategy to two research organizations in Sweden and examine the effectiveness of the proposed method. Identified benchmark units are evaluated by examining research similarity and the robustness of various measures of connectivity.
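The abstract names four attributes for judging benchmark candidates (research topics, output, connectedness, scientific impact) but does not spell out a scoring rule. The sketch below is a hypothetical illustration of that kind of scoring, not the authors' published procedure: the topic-profile representation, the internal co-authorship density used as a proxy for "connectedness", and the equal weights are all illustrative assumptions.

```python
# Hypothetical sketch of benchmark-candidate scoring (illustrative assumptions only).
from dataclasses import dataclass
from math import sqrt


@dataclass
class Organization:
    name: str
    topic_profile: dict        # topic label -> share of the unit's output (assumed representation)
    output: int                # number of publications in the evaluation window
    internal_density: float    # 0..1 co-authorship network density, an assumed proxy for connectedness
    normalized_impact: float   # e.g. a field-normalized citation score, used to compare impact afterwards


def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic profiles."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def benchmark_score(focal: Organization, candidate: Organization) -> float:
    """Score a candidate on topic similarity, comparable output and connectedness."""
    topic_sim = cosine(focal.topic_profile, candidate.topic_profile)
    denom = max(focal.output, candidate.output)
    output_sim = min(focal.output, candidate.output) / denom if denom else 0.0
    # Equal weights are an arbitrary choice for illustration only.
    return (topic_sim + output_sim + candidate.internal_density) / 3.0


if __name__ == "__main__":
    focal = Organization("Focal unit", {"bibliometrics": 0.6, "research policy": 0.4}, 120, 0.35, 1.1)
    candidates = [
        Organization("Unit A", {"bibliometrics": 0.5, "scientometrics": 0.5}, 100, 0.40, 1.0),
        Organization("Unit B", {"machine learning": 0.9, "bibliometrics": 0.1}, 400, 0.20, 1.3),
    ]
    for c in sorted(candidates, key=lambda c: benchmark_score(focal, c), reverse=True):
        print(f"{c.name}: {benchmark_score(focal, c):.3f}")
```

In this toy ranking, Unit A would score higher than Unit B because its topic profile and output volume are closer to the focal unit's; impact is kept as a separate attribute for evaluating the identified benchmarks rather than folded into the ranking, which is one possible reading of the abstract, not a claim about the paper's actual design.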
Funder
Royal Institute of Technology
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Computer Science Applications, General Social Sciences
Cited by
4 articles.