Abstract
Our goal in this paper is to establish a set of criteria for understanding the meaning and sources of attributing (un)fairness to AI algorithms. To do so, we first establish that (un)fairness, like other normative notions, can be understood in a proper primary sense and in secondary senses derived by analogy. We argue that AI algorithms cannot be said to be (un)fair in the proper sense due to a set of criteria related to normativity and agency. However, we demonstrate how and why AI algorithms can be qualified as (un)fair by analogy and explore the sources of this (un)fairness and the associated problems of responsibility assignment. We conclude that more user-driven AI approaches could alleviate some of these difficulties.
Funder
HORIZON EUROPE European Research Council
Swiss Federal Institute of Technology Zurich
Publisher
Springer Science and Business Media LLC