Authors:
Jesper Ryberg, Thomas S. Petersen
Abstract
Predictive accuracy and transparency are generally recognized as ethically desirable features of algorithms at sentencing. However, it is often assumed, explicitly or implicitly, that these two features conflict: making an algorithmic tool more transparent comes at a cost in predictive accuracy, and vice versa. The purpose of the present chapter is to examine the nature of this conflict. More precisely, it is first argued that even if transparency and accuracy do conflict, this does not show the conflict to be of genuine ethical significance. Second, even where there is a genuine ethical conflict between transparency and accuracy, it may sometimes be resolved in ways other than by engaging in trade-offs. Finally, the chapter discusses the theoretical and practical implications of these conclusions.
Publisher
Oxford University Press, New York
Cited by
4 articles.