Abstract
We plan to empirically study the assessment of scientific papers within the framework of the anchoring-and-adjustment heuristic. This is a follow-up study intended to answer open questions from the previous study on the same topic (Bornmann, 2021; Bornmann, 2023). The previous and follow-up studies address a central question in research evaluation: does bibliometrics create the social order in science that it is designed to measure, or does bibliometrics reflect a given social order (one that depends on the intrinsic quality of research)? If bibliometrics creates the social order, it can be interpreted as an anchoring-and-adjustment heuristic. In the planned study, we shall survey corresponding authors with an available email address in the Web of Science database. The authors are asked to assess the quality of articles that they cited in previous papers. The authors are randomly assigned to experimental groups in which they receive either citation information for the cited articles or a numerical access code for entering the survey; the control group receives no further numerical information. In the statistical analyses, we estimate how strongly respondents adjust their quality assessments of the cited papers toward the anchor value (citation counts or access code). We are thus interested in whether adjustments in the assessments can be produced not only by quality-related information (the citation counts) but also by numbers unrelated to quality (the access code). Strong anchor effects would mean that bibliometrics (or, indeed, any number) creates the social order it is supposed to measure.
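To make the planned estimation concrete, the following is a minimal sketch (in Python, on simulated data) of how an anchoring effect of this kind could be estimated. The variable names, the rating scale, the anchor range, and the regression specification are illustrative assumptions for this sketch, not the registered analysis plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulation; the actual survey data and analysis plan may differ.
rng = np.random.default_rng(0)
n = 900
condition = rng.choice(["control", "citations", "access_code"], size=n)

# Anchor number shown to the two treatment groups (hypothetical range),
# standardized so the slope reads as "rating change per SD of the anchor".
raw_anchor = rng.integers(1, 201, size=n).astype(float)
z_anchor = (raw_anchor - raw_anchor.mean()) / raw_anchor.std()

# Simulated quality ratings on a 1-7 scale with an assumed pull toward the
# anchor: stronger for citation counts, weaker for the arbitrary access code,
# none for the control group (which sees no number).
pull = np.select(
    [condition == "citations", condition == "access_code"],
    [0.4, 0.2],
    default=0.0,
)
rating = np.clip(rng.normal(4.0, 1.0, size=n) + pull * z_anchor, 1, 7)

df = pd.DataFrame({"condition": condition, "z_anchor": z_anchor, "rating": rating})

# Within each anchor condition, the slope on z_anchor estimates how strongly
# respondents adjust their quality assessments toward the displayed number.
for cond in ["citations", "access_code"]:
    fit = smf.ols("rating ~ z_anchor", data=df[df["condition"] == cond]).fit()
    print(f"{cond}: slope = {fit.params['z_anchor']:.2f}, "
          f"p = {fit.pvalues['z_anchor']:.3f}")
```

Under this hedged reading, a significant positive slope in the citations group alone would suggest that respondents use quality-related numbers as anchors, whereas a significant slope in the access-code group as well would suggest that any salient number, even one unrelated to quality, shifts the assessments.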
Publisher
Public Library of Science (PLoS)