Abstract
Predictive analytics and artificial intelligence are applied widely across law enforcement agencies and the criminal justice system. Despite criticism that such tools reinforce inequality and structural discrimination, proponents insist that they will nonetheless improve the equality and fairness of outcomes by countering humans’ biased or capricious decision-making. How can predictive analytics be understood simultaneously as a source of, and solution to, discrimination and bias in criminal justice and law enforcement? The article provides a framework for understanding the techno-political gambit of predictive policing as a mechanism of police reform—a discourse that I call “predictive policing for reform.” Focusing specifically on geospatial predictive policing systems, I argue that “predictive policing for reform” should be seen as a flawed attempt to rationalize police patrols through an algorithmic remediation of patrol geographies. The attempt is flawed because predictive systems operate on the sociotechnical practices of police patrols, which are themselves contradictory enactments of the state’s power to distribute safety and harm. The ambiguities and contradictions of the patrol are not resolved through algorithmic remediation. Instead, they lead to new indeterminacies, trade-offs, and experimentations based on unfalsifiable claims. I detail these through a discussion of predictive policing firm HunchLab’s use of predictive analytics to rationalize patrols and mitigate bias. Understanding how the “predictive policing for reform” discourse is operationalized as a series of technical fixes that rely on the production of indeterminacies allows for a more nuanced critique of predictive policing.
Publisher
Queen's University Library
Subject
Urban Studies, Safety Research
Cited by
39 articles.