Abstract
The problem of algorithmic fairness is typically framed as the problem of finding a unique formal criterion that guarantees that a given algorithmic decision-making procedure is morally permissible. In this paper, I argue that this framing is conceptually misguided and that we should replace the problem with two sub-problems. If we examine how most state-of-the-art machine learning systems work, we notice two distinct stages in the decision-making process. First, a prediction of a relevant property is made. Second, a decision is taken based (at least partly) on this prediction. These two stages have different aims: the prediction aims at accuracy, while the decision aims at allocating a given good in a way that maximizes some context-relative utility measure. Correspondingly, two different fairness issues can arise. First, predictions could be biased in discriminatory ways, meaning that they contain systematic errors for a specific group of individuals. Second, the system's decisions could result in an allocation of goods that is in tension with the principles of distributive justice. These two fairness issues are distinct problems that require different types of solutions. Here I provide a formal framework to address both issues and argue that this way of conceptualizing them resolves some of the paradoxes present in the discussion of algorithmic fairness.
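The two-stage structure described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's formal framework: the data, the per-group mean signed error as a proxy for "systematic errors for a specific group," and the threshold decision rule are all hypothetical choices made here for concreteness.

```python
def group_bias(predictions, outcomes, groups):
    """Stage one check: mean signed error per group. A nonzero value for
    one group indicates systematic over- or under-prediction for it."""
    totals, counts = {}, {}
    for p, y, g in zip(predictions, outcomes, groups):
        totals[g] = totals.get(g, 0.0) + (p - y)
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def allocate(predictions, threshold=0.5):
    """Stage two: a simple threshold rule that allocates the good to
    individuals whose predicted score clears the threshold."""
    return [p >= threshold for p in predictions]

# Hypothetical scores, true outcomes, and group labels.
preds  = [0.8, 0.6, 0.4, 0.9, 0.3, 0.7]
truth  = [1,   1,   0,   1,   1,   0]
groups = ["A", "A", "A", "B", "B", "B"]

bias = group_bias(preds, truth, groups)   # per-group signed error
decisions = allocate(preds)               # who receives the good
```

On this picture, `group_bias` addresses the first fairness issue (discriminatory prediction error), while evaluating `decisions` against a principle of distributive justice is a separate question that no statistic computed from the predictions alone can settle.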
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Philosophy
Cited by 10 articles.