Abstract
The presence of decision-making algorithms in society is rapidly increasing, while concerns about their transparency and the possibility of these algorithms becoming new sources of discrimination are arising. There is a certain consensus about the need to develop AI applications with a Human-Centric approach. Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes. All four Human-Centric requirements are closely related to each other. With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious case study focused on automated recruitment: FairCVtest. We train automatic recruitment algorithms using a set of multimodal synthetic profiles including image, text, and structured data, which are consciously scored with gender and racial biases. FairCVtest shows the capacity of the Artificial Intelligence (AI) behind automatic recruitment tools built this way (a common practice in many other application scenarios beyond recruitment) to extract sensitive information from unstructured data and exploit it in combination with data biases in undesirable (unfair) ways. We present an overview of recent works developing techniques capable of removing sensitive information and biases from the decision-making process of deep learning architectures, as well as commonly used databases for fairness research in AI. We demonstrate how learning approaches developed to guarantee privacy in latent spaces can lead to unbiased and fair automatic decision-making processes. Our methodology and results show how to generate fairer AI-based tools in general, and fairer automated recruitment systems in particular.
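The latent-space privacy approach mentioned in the abstract is commonly realized as adversarial removal of sensitive attributes from a learned representation. The following is a minimal PyTorch sketch of that general technique, not the authors' FairCVtest implementation: it assumes pre-extracted feature vectors for the image, text, and structured modalities, and all names here (MultimodalScorer, SensitiveDiscriminator, training_step, lam) are hypothetical.

```python
# Sketch: adversarially removing a sensitive attribute (e.g., gender)
# from the latent space of a multimodal candidate-scoring network.
import torch
import torch.nn as nn

class MultimodalScorer(nn.Module):
    """Fuses image, text, and structured features into one latent
    vector and predicts a candidate score from it."""
    def __init__(self, img_dim, txt_dim, struct_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_dim + txt_dim + struct_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.score_head = nn.Linear(latent_dim, 1)

    def forward(self, img, txt, struct):
        z = self.encoder(torch.cat([img, txt, struct], dim=-1))
        return self.score_head(z), z

class SensitiveDiscriminator(nn.Module):
    """Tries to recover the sensitive attribute from the latent
    vector; the encoder is trained to defeat it."""
    def __init__(self, latent_dim=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_classes)
        )

    def forward(self, z):
        return self.net(z)

def training_step(model, disc, batch, opt_model, opt_disc, lam=1.0):
    img, txt, struct, score, sensitive = batch
    # 1) Update the discriminator to predict the sensitive attribute
    #    from a detached copy of the latent vector.
    _, z = model(img, txt, struct)
    disc_loss = nn.functional.cross_entropy(disc(z.detach()), sensitive)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()
    # 2) Update the scorer: fit the target score while *maximizing*
    #    the discriminator's loss, pushing sensitive information
    #    out of the latent space.
    pred, z = model(img, txt, struct)
    task_loss = nn.functional.mse_loss(pred.squeeze(-1), score)
    adv_loss = nn.functional.cross_entropy(disc(z), sensitive)
    total = task_loss - lam * adv_loss
    opt_model.zero_grad(); total.backward(); opt_model.step()
    return task_loss.item(), disc_loss.item()
```

In this sketch the weight lam controls the utility/privacy trade-off: larger values remove more sensitive information from the latent space at some cost in scoring accuracy, mirroring the trade-off the abstract describes between task performance and fairness.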
Publisher
Springer Science and Business Media LLC
Subject
Computer Science Applications, Computer Networks and Communications, Computer Graphics and Computer-Aided Design, Computational Theory and Mathematics, Artificial Intelligence, General Computer Science
Cited by
3 articles.