Abstract
Machine learning is part of the daily life of people and companies worldwide. Unfortunately, bias in machine learning algorithms risks unfairly influencing decision-making processes and perpetuating discrimination. While the software engineering community's interest in software fairness is rapidly increasing, there is still a lack of understanding of various aspects of fair machine learning engineering, i.e., the software engineering process involved in developing fairness-critical machine learning systems. Questions about practitioners' awareness of and maturity with fairness, the skills required to deal with it, and the development phase(s) in which fairness should best be addressed are just some examples of the knowledge gaps currently open. In this paper, we provide insights into how fairness is perceived and managed in practice, to shed light on the instruments and approaches that practitioners might employ to handle fairness properly. We conducted a survey with 117 professionals who shared their knowledge and experience, highlighting the relevance of fairness in practice and the skills and tools required to handle it. The key results of our study show that fairness is still considered a second-class quality aspect in the development of artificial intelligence systems. Building specific methods and development environments, in addition to automated validation tools, might help developers treat fairness throughout the software lifecycle and reverse this trend.
Funder
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung
Ministero dell’Università e della Ricerca
Ministero dell’Istruzione, dell’Università e della Ricerca
Publisher
Springer Science and Business Media LLC
Cited by
1 article.