Affiliation:
1. Institute of Service Science, National Tsing Hua University, Hsinchu, Taiwan
2. Institute of Law for Science and Technology, National Tsing Hua University, Hsinchu, Taiwan
3. Department of Business Law and Taxation, Monash University, Clayton, Victoria, Australia
Abstract
Big data and algorithmic risk prediction tools promise to improve criminal justice systems by reducing human biases and inconsistencies in decision-making. Yet different, equally justifiable choices when developing, testing and deploying these socio-technical tools can lead to disparate predicted risk scores for the same individual. Synthesising diverse perspectives from machine learning, statistics, sociology, criminology, law, philosophy and economics, we conceptualise this phenomenon as predictive inconsistency. We describe sources of predictive inconsistency at different stages of algorithmic risk assessment tool development and deployment and consider how future technological developments may amplify predictive inconsistency. We argue, however, that in a diverse and pluralistic society we should not expect to completely eliminate predictive inconsistency. Instead, to bolster the legal, political and scientific legitimacy of algorithmic risk prediction tools, we propose identifying and documenting relevant and reasonable ‘forking paths’ to enable quantifiable, reproducible multiverse and specification curve analyses of predictive inconsistency at the individual level.
Funder
Taiwan National Science and Technology Council
Publisher
Oxford University Press (OUP)
Subject
Statistics, Probability and Uncertainty; Economics and Econometrics; Social Sciences (miscellaneous); Statistics and Probability
Cited by
1 article.
1. Persons and Personalization on Digital Platforms. Advances in Human and Social Aspects of Technology, 2023-10-16.