Affiliation:
1. Toulouse School of Economics, Institute for Advanced Study in Toulouse, University of Toulouse Capitole, Toulouse, 31015, France
Abstract
The usual narrative about artificial intelligence (AI) in legal decision-making is one of backlash. A recent study found that giving judges algorithmic decision support ended up increasing disparities, not because the algorithm was biased (in fact, following the algorithm would have lowered disparities), but because judges paid attention to it selectively, which produced greater disparities. This article argues for an incremental approach that leverages recent theoretical insights from social preference economics. The core insight is that judges are moral decision-makers (a choice is right or wrong, good or bad), and to understand what motivates such decision-makers, one can turn to self-image motives, a topic of active behavioral research in recent years. Each stage of the proposed approach leverages motives related to the self: self-image, self-improvement, self-understanding, and ego. In Stage 1, people use AI as a support tool that speeds up existing processes (for example, by prefilling forms). Once they are accustomed to this, they can more easily accept an added functionality (Stage 2) in which the AI becomes a choice monitor, pointing out inconsistencies and reminding the human of her prior choices in similar situations. Stage 3 elevates the AI to the role of a more general coach, providing outcome feedback on choices and highlighting decision patterns. Finally, in Stage 4, the AI brings in other people's decision histories and patterns, serving as a platform for a community of experts. This framework contrasts with the current one, in which the AI simply recommends an optimal decision.
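To make the staged framework concrete, the sketch below illustrates one way a Stage 2 choice monitor could work: it compares the current decision against the judge's own choices in the most similar past cases and surfaces a reminder when they diverge. This is a minimal illustration under assumed conventions, not the article's implementation; the data model (Case), the function choice_monitor, the nearest-neighbor similarity measure, and the parameters k and threshold are all hypothetical.

```python
# Hypothetical sketch of a Stage 2 "choice monitor" (Python 3.10+).
# The feature encoding, distance metric, and thresholds are illustrative
# assumptions, not taken from the article.
from dataclasses import dataclass
from math import dist


@dataclass
class Case:
    features: tuple[float, ...]  # e.g. normalized offense severity, prior record
    decision: int                # e.g. 0 = release, 1 = detain


def choice_monitor(history: list[Case], current: Case,
                   k: int = 5, threshold: float = 0.6) -> str | None:
    """Flag the current decision if it diverges from the judge's own
    choices in the k most similar past cases."""
    if len(history) < k:
        return None  # too little history to assess consistency
    # Rank the judge's own past cases by similarity (Euclidean distance).
    neighbors = sorted(history,
                       key=lambda c: dist(c.features, current.features))[:k]
    disagree = sum(c.decision != current.decision for c in neighbors)
    if disagree / k >= threshold:
        return (f"Reminder: in {disagree} of your {k} most similar past "
                f"cases, you decided differently.")
    return None


# Example: a judge who usually released in low-severity, no-record cases
# is about to detain in a similar case; the monitor surfaces a reminder.
past = [Case((0.2, 0.1), 0), Case((0.3, 0.0), 0), Case((0.1, 0.2), 0),
        Case((0.9, 0.8), 1), Case((0.8, 0.9), 1)]
print(choice_monitor(past, Case((0.2, 0.15), 1), k=3))
```

Note that, in keeping with the article's framing, the monitor only reminds the decision-maker of her own prior pattern; it does not recommend an optimal decision.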
Subject
Law, Economics, Econometrics and Finance (miscellaneous)