Fairness-Aware Federated Learning with Real-Time Bias Detection and Correction

Authors:

Yadav Vishal, Kale Shreeja

Abstract

Federated Learning (FL) enables collaborative model training across decentralized devices while preserving user data privacy. However, disparities in data distributions among clients can lead to biased models that perform unfairly across demographic groups. This paper proposes a fairness-aware Federated Learning framework equipped with real-time bias detection and correction mechanisms. Our approach adjusts model updates to address biases detected at the local client level before they are aggregated at the central server, and we demonstrate its effectiveness through empirical evaluations on multiple datasets, showing significant improvements in both fairness and model accuracy. The proposed framework takes a multi-tiered approach to fairness in training. First, each client applies local bias detection techniques to identify disparities in model performance across groups. Clients then apply bias correction mechanisms to adjust their model updates, addressing any detected biases before sending the updates to the central server. Finally, the server aggregates these bias-corrected updates, so that the global model benefits from equitable learning while maintaining overall performance.
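The three-tier pipeline described in the abstract (local bias detection, client-side correction, server aggregation) could be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the fairness metric (demographic parity gap), the damping-style correction, the logistic-regression client step, and all function names and thresholds are assumptions introduced here for clarity.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups (0 and 1)."""
    rate_0 = preds[groups == 0].mean()
    rate_1 = preds[groups == 1].mean()
    return abs(rate_0 - rate_1)

def local_update(w_global, X, y, groups, lr=0.1, fairness_tol=0.05):
    """Client step: one gradient update, then bias detection and a simple correction."""
    # One logistic-regression gradient step on the local data.
    p = 1.0 / (1.0 + np.exp(-(X @ w_global)))
    grad = X.T @ (p - y) / len(y)
    w_local = w_global - lr * grad
    # Tier 1: detect bias of the updated local model on the client's own data.
    preds = (X @ w_local > 0).astype(float)
    gap = demographic_parity_gap(preds, groups)
    # Tier 2: if the gap exceeds tolerance, damp the update toward the global model
    # (a placeholder correction; the paper's mechanism may differ).
    if gap > fairness_tol:
        w_local = w_global + 0.5 * (w_local - w_global)
    return w_local, gap

def aggregate(updates, sizes):
    """Tier 3: server-side weighted average of bias-corrected client models (FedAvg-style)."""
    weights = np.asarray(sizes, dtype=float)
    weights /= weights.sum()
    return sum(wi * ui for wi, ui in zip(weights, updates))
```

In this sketch the server only ever sees already-corrected updates, which preserves the FL privacy property that raw client data and group labels never leave the device.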

Publisher

International Journal of Innovative Science and Research Technology

