The statistical fairness field guide: perspectives from social and formal sciences
Published: 2022-06-25
Volume: 3
Issue: 1
Pages: 1-23
ISSN: 2730-5953
Container-title: AI and Ethics
Short-container-title: AI Ethics
Language: en
Authors: Carey, Alycia N. (ORCID); Wu, Xintao
Abstract
Over the past several years, a multitude of methods to measure the fairness of a machine learning model have been proposed. However, despite the growing number of publications and implementations, there is still a critical lack of literature that explains the interplay of fair machine learning with the social sciences of philosophy, sociology, and law. We hope to remedy this issue by accumulating and expounding upon the thoughts and discussions of fair machine learning produced by both social and formal (i.e., machine learning and statistics) sciences in this field guide. Specifically, in addition to giving the mathematical and algorithmic backgrounds of several popular statistics-based fairness metrics used in fair machine learning, we explain the underlying philosophical and legal thoughts that support them. Furthermore, we explore several criticisms of the current approaches to fair machine learning from sociological, philosophical, and legal viewpoints. It is our hope that this field guide helps machine learning practitioners identify and remediate cases where algorithms violate human rights and values.
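As a concrete illustration of the kind of statistics-based metric the paper surveys, below is a minimal sketch of the demographic (statistical) parity difference: the absolute gap in positive-prediction rates between two demographic groups. The function name and toy data are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) for each individual.
    group:  binary protected attribute (0/1) for each individual.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(Y_hat = 1 | A = 0)
    rate_b = y_pred[group == 1].mean()  # P(Y_hat = 1 | A = 1)
    return abs(rate_a - rate_b)

# Toy example: 8 individuals, 4 per group (hypothetical data).
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5 -> large disparity
```

A value of 0 would indicate that both groups receive positive predictions at equal rates; the paper discusses why satisfying such a criterion may or may not align with philosophical and legal notions of fairness.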
Funder
National Science Foundation
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
Cited by
14 articles.