Ensuring Fairness and Gradient Privacy in Personalized Heterogeneous Federated Learning

Authors:

Cody Lewis¹, Vijay Varadharajan¹, Nasimul Noman¹, Uday Tupakula²

Affiliation:

1. University of Newcastle, Callaghan, Australia

2. University of Newcastle, Callaghan, Australia and University of New England, Armidale, Australia

Abstract

With the increasing tension between the conflicting requirements of making large amounts of data available for effective machine learning-based analysis and of ensuring the privacy of that data, the paradigm of federated learning has emerged: a distributed machine learning setting in which clients provide only model updates to the server rather than the actual data used for decision making. However, the distributed nature of federated learning raises specific challenges related to fairness in a heterogeneous setting. This motivates the focus of our article: the heterogeneity of client devices with different computational capabilities and its impact on fairness in federated learning. Furthermore, we aim to achieve fairness under this heterogeneity while ensuring privacy. As far as we are aware, no existing work addresses all three aspects of fairness, device heterogeneity, and privacy simultaneously in federated learning. In this article, we propose a novel federated learning algorithm with personalization for heterogeneous devices that remains compatible with the gradient privacy preservation techniques of secure aggregation. We analyze the proposed algorithm in different environments with different datasets and show that it achieves performance close to or greater than the state of the art in heterogeneous-device personalized federated learning. We also provide theoretical proofs of the fairness and convergence properties of our proposed algorithm.
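
To make the federated learning setting described above concrete, the following is a minimal, generic sketch of a single federated-averaging round in which each client trains locally and sends only its updated model weights to the server, never its raw data. The function names, the linear model, and the synthetic data are illustrative assumptions; this is not the personalized heterogeneous algorithm proposed in the article.

```python
# Generic sketch of one federated-averaging round (illustrative only):
# clients share model updates, not their local data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client side: a few gradient-descent steps on a local linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only the updated weights leave the device

def federated_average(client_weights, client_sizes):
    """Server side: aggregate client updates weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    # Two simulated clients holding private local datasets of different sizes.
    clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)),
               (rng.normal(size=(20, 3)), rng.normal(size=20))]
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
    print("aggregated global weights:", global_w)
```

Secure aggregation schemes replace the plain `federated_average` step with a protocol in which the server learns only the aggregate of the client updates, which is the compatibility requirement the proposed algorithm is designed to satisfy.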

Funder

Australian Government Research Training Program (RTP) Scholarship

Publisher

Association for Computing Machinery (ACM)
