Authors:
Astrid Marieke Rosenthal-von der Pütten, Alexandra Sach
Abstract
Introduction
Artificial intelligence algorithms are increasingly adopted as decision aids in many contexts, such as human resources, often with the promise of being fast, efficient, and even capable of overcoming the biases of human decision-makers. At the same time, this promise of objectivity and the increasingly supervisory role of humans may make it more likely for existing biases in algorithms to be overlooked, as humans are prone to over-rely on such automated systems. This study therefore investigates reliance on biased algorithmic advice in a hiring context.

Method
Simulating the algorithmic pre-selection of applicants, we confronted participants with biased or unbiased recommendations in a 1 × 2 between-subjects online experiment (n = 260).

Results
The findings suggest that the algorithmic bias went unnoticed by about 60% of the participants in the bias condition, even when they were explicitly asked about it. However, overall, individuals relied less on the biased algorithm, making more changes to the algorithmic scores. Reduced reliance on the algorithm led to increased noticing of the bias. The biased recommendations did not lower general attitudes toward algorithms, only evaluations of this specific hiring algorithm, whereas explicitly noticing the bias affected both. Individuals with a more negative attitude toward the decision subjects were more likely to overlook the bias.

Discussion
This study extends the literature by examining the interplay of (biased) human operators and biased algorithmic decision support systems, highlighting the potential negative impact of such automation on vulnerable and disadvantaged individuals.