Affiliation:
1. Australian National University
Abstract
Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrong.
Funder
Australian Research Council
Cited by: 2 articles.