Abstract
The Fairness, Accountability, and Transparency (FAccT) literature tends to treat bias as a problem requiring ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework that classifies these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl’s ladder of causation (Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge, 2000; Causality, 2nd edn. Cambridge University Press, Cambridge, 2009. https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches according to their ability to answer fairness-relevant questions and identify fairness-relevant solutions. The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We do so by building on Kohler-Hausmann’s (Northwest Univ Law Rev 113(5):1163–1227, 2019) constructivist theory of discrimination. We derive three recommendations: for XAI practitioners to develop, and for AI policymakers to regulate, tools that address algorithmic bias in its conditions and thereby mitigate its future occurrence.
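To make concrete the kind of feature attribution the abstract refers to, the following is a minimal, self-contained sketch of exact Shapley-value attribution for a single prediction. The scoring model, feature names (income, zip_code, age), weights, and baseline are hypothetical illustrations, not the authors' setup; absent features are held at baseline values, one common convention among several.

```python
# Minimal sketch of exact Shapley-value feature attribution (illustrative only).
# Assumptions: a toy linear scoring model with hypothetical features and weights,
# and "absent" features fixed at a baseline value.
from itertools import combinations
from math import factorial

def predict(features):
    # Hypothetical scoring model; weights are illustrative only.
    weights = {"income": 0.5, "zip_code": 0.3, "age": 0.2}
    return sum(weights[f] * v for f, v in features.items())

def shapley_values(instance, baseline):
    """Exact Shapley values: each feature's attribution is its average
    marginal contribution over all coalitions of the other features."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: instance[g] if (g in coalition or g == f) else baseline[g]
                          for g in names}
                without_f = {g: instance[g] if g in coalition else baseline[g]
                             for g in names}
                total += weight * (predict(with_f) - predict(without_f))
        phi[f] = total
    return phi

instance = {"income": 1.0, "zip_code": 1.0, "age": 0.0}
baseline = {"income": 0.0, "zip_code": 0.0, "age": 0.0}
print(shapley_values(instance, baseline))
# For a linear model, each attribution reduces to weight * (instance - baseline),
# here: income 0.5, zip_code 0.3, age 0.0.
```

A fairness-relevant reading of such output is the one the article develops: an attribution on a feature like zip_code may proxy for a protected characteristic, so the numbers are evidence to be interpreted against social conditions, not a verdict.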
Funder
Centre for Digital Ethics, Bologna University
Publisher
Springer Science and Business Media LLC
References (54 articles)
1. Aas, K., Jullum, M., & Løland, A. (2021). Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. Artificial Intelligence, 298, 103502. https://doi.org/10.1016/j.artint.2021.103502
2. ACLU California Action. (2020). AB 256. ACLU California Action. https://aclucalaction.org/bill/ab-256/
3. Abdollahi, B., & Nasraoui, O. (2018). Transparency in fair machine learning: The case of explainable recommender systems. In J. Zhou & F. Chen (Eds.), Human and machine learning: Visible, explainable, trustworthy and transparent (pp. 21–35). Springer. https://doi.org/10.1007/978-3-319-90403-0_2
4. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
5. Agyeman, J. (2021, March 9). How urban planning and housing policy helped create ‘food apartheid’ in US cities. The Conversation. http://theconversation.com/how-urban-planning-and-housing-policy-helped-create-food-apartheid-in-us-cities-154433