Abstract
Artificial intelligence (AI) is rapidly expanding across industries and systems. This study investigated public trust in the use of AI in the criminal court process. While previous research has identified factors that influence trust in AI, such as the perceived accuracy and transparency of algorithms, less is known about the role of influential leaders, such as judges, in shaping public trust in new technology. This study examined the relationship between locus of control, anthropomorphism, cultural values, and perceived trust in AI. Participants completed a survey assessing their trust in AI for determining bail eligibility, bail fines and fees, sentence length, sentencing fines and fees, and writing legal documents (e.g., findings and disposition). Participants were more likely to trust AI to perform financial calculations than to determine bail eligibility, determine sentence length, or draft legal documents. Participants’ comfort with AI in decision-making also depended on their perceptions of judges’ trust in AI, and they expressed concerns about AI perpetuating bias and the need for extensive testing to ensure accuracy. Interestingly, no significant association was found with other participant characteristics (e.g., locus of control, anthropomorphism, or cultural values). This study contributes to the literature by highlighting the role of judges as influential leaders in shaping public trust in AI and by examining the influence of individual differences on trust in AI. The findings also help inform the development of recommended practices and ethical guidelines for the responsible use of AI in the courts.
Publisher
Springer Science and Business Media LLC