This chapter addresses recent concerns about “algorithmic bias,” specifically in the context of the criminal justice process. Starting from a recent controversy over the use of “automated risk assessment tools” in criminal sentencing and parole hearings, where evidence suggests that such tools effectively discriminate against minority defendants, this chapter argues that the problem has nothing in particular to do with algorithm-assisted reasoning, nor is it in any clear sense a case of epistemic bias. Rather, given the data set we have to work with, there is reason to think that no improvement to our epistemic routines would deliver significantly better results; the bias is effectively encoded into the data set itself, via a long history of institutionalized racism. This suggests a different diagnosis of the problem: in deeply divided societies, there may simply be no way to satisfy our moral ideals and our epistemic ideals at the same time.