Survey on Machine Learning Biases and Mitigation Techniques
Author:
Siddique, Sunzida (1); Haque, Mohd Ariful (2); George, Roy (2); Gupta, Kishor Datta (2, ORCID); Gupta, Debashis (3); Faruk, Md Jobair Hossain (4, ORCID)
Affiliation:
1. Department of CSE, Daffodil International University, Dhaka 1215, Bangladesh
2. Department of Computer and Information Science, Clark Atlanta University, Atlanta, GA 30314, USA
3. Computer Science, Wake Forest University, Winston-Salem, NC 27109, USA
4. New York Institute of Technology, Old Westbury, NY 11545, USA
Abstract
Machine learning (ML) has become increasingly prevalent across many domains. However, ML algorithms can produce unfair outcomes that discriminate against certain groups; bias arises when a system's decisions are systematically incorrect. Such biases can be introduced at various phases of the ML pipeline, including data collection, pre-processing, model selection, and evaluation. A variety of bias reduction methods have been proposed, which attempt to lessen bias by changing the data, changing the model itself, adding fairness constraints, or some combination of these. Because each technique has advantages and disadvantages, the most suitable choice depends on the particular context and application. In this paper, we therefore present a comprehensive survey of bias mitigation techniques in ML, with an in-depth exploration of methods including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion includes a detailed analysis of pre-processing, in-processing, and post-processing methods, along with their respective pros and cons. Moreover, we go beyond qualitative assessment by quantifying bias reduction strategies and providing empirical evidence and performance metrics. This paper serves as a resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a deeper understanding of the issue and actionable insights for responsible and effective bias mitigation.
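To make the pre-processing category concrete, the short Python/NumPy sketch below (an illustrative example, not code taken from the survey) computes sample reweighing factors in the spirit of Kamiran and Calders together with a simple demographic parity gap; it assumes binary labels and a single binary protected attribute, both coded as 0/1.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups (0 = parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def reweighing_weights(y, group):
    """Pre-processing weights: each (group, label) cell is weighted by
    P(G=g) * P(Y=c) / P(G=g, Y=c), so that label and group membership
    become statistically independent in the weighted training data."""
    y, group = np.asarray(y), np.asarray(group)
    w = np.ones(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            cell = (group == g) & (y == c)
            observed = cell.mean()                            # P(G=g, Y=c)
            expected = (group == g).mean() * (y == c).mean()  # P(G=g) * P(Y=c)
            if observed > 0:
                w[cell] = expected / observed
    return w

if __name__ == "__main__":
    # Synthetic example: label rate depends on group membership.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)
    y = (rng.random(1000) < 0.3 + 0.3 * group).astype(int)
    w = reweighing_weights(y, group)
    print("parity gap of raw labels:", demographic_parity_difference(y, group))
    print("example weight for (group=0, y=1):", w[(group == 0) & (y == 1)][0])

A classifier that accepts per-sample weights, for example any scikit-learn estimator whose fit method takes a sample_weight argument, can then be trained on the reweighted data, and the parity gap of its predictions can be compared before and after mitigation to quantify the effect.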
Cited by
3 articles.