Affiliation:
1. Yale University, New Haven, Connecticut 06520
Abstract
Collective evaluation processes, which offer individuals an opportunity to assess quality, have spread beyond mainstream sectors (e.g., books, restaurants) into professional contexts, from evaluations within and across organizations to the gig economy. This paper introduces a theoretical framework for understanding how evaluators' visibility into prior evaluations influences the subsequent evaluation process: the likelihood of evaluating at all and the value of the evaluations that are ultimately submitted. Central to this discussion are the conditions under which evaluations converge (become more similar to prior evaluations) or diverge (become less similar), as well as the mechanisms driving the observed outcomes. Using a quasi-natural experiment on a platform where investment professionals submit and evaluate investment recommendations, I compare evaluations made with and without the possibility of prior ratings influencing the subsequent evaluation process. I find that when prior ratings are visible, convergence occurs: the visibility of prior evaluations decreases the likelihood that a subsequent evaluation occurs by about 50%, and subsequent evaluations move 54%–63% closer to the visible rating. Further analysis suggests that peer deference is a dominant mechanism driving convergence and that only professionals with specialized expertise resist it. Notably, there is no evidence that initial ratings are related to long-term performance; thus, in this context, convergence distorts the available quality signal for a recommendation. These findings underscore how the structure of evaluation processes can perpetuate initial stratification, even among professionals with baseline levels of expertise. Supplemental Material: The online appendix is available at https://doi.org/10.1287/orsc.2017.11285.
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)