Content Analysis of Judges’ Sentiments Toward Artificial Intelligence Risk Assessment Tools

Authors:

Fine A.1, Le S.1, Miller M. K.1

Affiliation:

1. University of Nevada

Abstract

Objective: to analyze judges' positions on risk assessment tools that use artificial intelligence.

Methods: the dialectical approach to the cognition of social phenomena, which allows analyzing them in their historical development and functioning within the totality of objective and subjective factors; this predetermined the following research methods: formal-logical and sociological.

Results: Artificial intelligence (AI) uses computer programming to make predictions (e.g., bail decisions) and has the potential to benefit the justice system (e.g., by saving time and reducing bias). This secondary data analysis assessed 381 judges' responses to the question, "Do you feel that artificial intelligence (using computer programs and algorithms) holds promise to remove bias from bail and sentencing decisions?"

Scientific novelty: The authors created a priori themes based on the literature, which included judges' algorithm aversion and appreciation, locus of control, procedural justice, and legitimacy. The results suggest that judges experience algorithm aversion, have significant concerns about bias being exacerbated by AI, and worry about being replaced by computers. Judges believe that AI has the potential to inform their decisions about bail and sentencing; however, it must be empirically tested and must follow guidelines. Using the data gathered about judges' sentiments toward AI, the authors discuss the integration of AI into the legal system and directions for future research.

Practical significance: the main provisions and conclusions of the article can be used in scientific, pedagogical, and law enforcement activities when considering issues related to the legal risks of using artificial intelligence.

Publisher

Kazan Innovative University named after V. G. Timiryasov

