Abstract
As arbiters of law and fact, judges are expected to decide cases impartially, grounding their decisions in authoritative legal sources rather than being swayed by irrelevant factors. Empirical evidence, however, shows that judges are often influenced by implicit biases, which can compromise the impartiality of their judgment and threaten the right to a fair trial. In recent years, artificial intelligence (AI) has been used for a growing variety of applications in the public domain, often with the promise of being more accurate and objective than biased human decision-makers. Against this backdrop, this research article identifies how AI is being deployed by courts, mainly as decision-support tools for judges. It assesses the potential and limitations of these tools, focusing on their use for risk assessment. Further, the article shows how AI can be used as a debiasing tool, i.e., to detect patterns of bias in judicial decisions so that corrective measures can be taken. Finally, it assesses the mechanisms and benefits of such use.
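To make the debiasing idea concrete, the following Python sketch illustrates one way a corpus of judicial decisions could be audited for group-level disparities. It is not the article's method: the example data, the column names defendant_group and pretrial_detention, and the choice of a chi-square independence test are all hypothetical assumptions used purely for illustration.

# A minimal sketch of "AI as a debiasing tool": auditing judicial
# decisions for group-level disparities. All data and column names
# below are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical decision records: one row per case,
# 1 = pretrial detention ordered, 0 = release.
decisions = pd.DataFrame({
    "defendant_group": ["A", "A", "A", "B", "B", "B", "A", "B"],
    "pretrial_detention": [1, 0, 1, 1, 1, 1, 0, 1],
})

# Cross-tabulate detention outcomes by defendant group.
table = pd.crosstab(decisions["defendant_group"],
                    decisions["pretrial_detention"])

# Chi-square test of independence: a low p-value flags a statistical
# association between group membership and outcome. This is a signal
# for human review and corrective measures, not proof of bias.
chi2, p_value, dof, _ = chi2_contingency(table)
detention_rates = decisions.groupby("defendant_group")["pretrial_detention"].mean()

print(detention_rates)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")

In a real deployment, such an audit would run over thousands of decisions and control for legally relevant case characteristics before flagging a disparity as a potential bias pattern.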