Affiliation:
1. Research Institute for Signals, Systems and Computational Intelligence, sinc(i), FICH-UNL/CONICET, Santa Fe (3000), Argentina
2. Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe (3000), Argentina
Abstract
Machine learning systems influence our daily lives in many different ways. Hence, it is crucial to ensure that the decisions and recommendations made by these systems are fair, equitable, and free of unintended biases. Over the past few years, the field of fairness in machine learning has grown rapidly, investigating how, when, and why these models capture, and even amplify, biases that are deeply rooted not only in the training data but also in our society. In this Commentary, we discuss challenges and opportunities for rigorous secondary analyses of publicly available data to build fair and equitable machine learning systems, focusing on the importance of training data, model construction, and diversity in the team of developers. The thoughts presented here have grown out of the work we did, which resulted in our winning the annual Research Parasite Award that GigaScience sponsors.
Funder
Universidad Nacional del Litoral
Publisher
Oxford University Press (OUP)
Subject
Computer Science Applications, Health Informatics