Abstract
The purpose of this paper is to provide a review of the literature on the original disruption index (DI1) and its variants in scientometrics. The DI1 has received much media attention and prompted a public debate about science policy implications after a study published in Nature found that papers in all disciplines, as well as patents, are becoming less disruptive over time. The first part of this review explains the DI1 and its variants in detail, examining their technical and theoretical properties. The remaining parts of the review are devoted to studies that examine the validity and limitations of the indices. Particular focus is placed on (1) possible biases that affect disruption indices, (2) the convergent and predictive validity of disruption scores, and (3) the comparative performance of the DI1 and its variants. The review shows that, while the literature on convergent validity is not entirely conclusive, it is clear that some modified index variants, in particular DI5, show higher degrees of convergent validity than DI1. The literature draws attention to the fact that (some) disruption indices suffer from inconsistency, time-sensitive biases, and several data-induced biases. The limitations of disruption indices are highlighted and best practice guidelines are provided. The review encourages users of the index to inform themselves about the variety of DI1 variants and to apply the most appropriate variant. More research on the validity of disruption scores, as well as a more precise understanding of disruption as a theoretical construct, is needed before the indices can be used in research evaluation practice.
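For orientation, since the abstract does not state the formula, the DI1 is commonly defined as follows (a sketch of the standard formulation going back to Funk and Owen-Smith's CD index; the notation N_F, N_B, N_R is ours, not the review's):

\[
DI_1 = \frac{N_F - N_B}{N_F + N_B + N_R}
\]

% N_F: papers citing the focal paper but none of its cited references
% N_B: papers citing both the focal paper and at least one of its cited references
% N_R: papers citing at least one of the cited references but not the focal paper

The index thus ranges from -1 (purely consolidating) to +1 (purely disruptive). The DI5 variant mentioned above tightens, as we understand the literature, the N_B criterion: a citing paper is counted as "citing both" only if it cites at least five of the focal paper's references.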
Funder
Generalverwaltung der Max-Planck-Gesellschaft
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Computer Science Applications, General Social Sciences
Cited by
14 articles.