FLRAM: Robust Aggregation Technique for Defense against Byzantine Poisoning Attacks in Federated Learning

Authors:

Chen Haitian 1,2,3, Chen Xuebin 1,2,3 (ORCID), Peng Lulu 1,2,3, Ma Ruikui 1,2,3

Affiliation:

1. College of Science, North China University of Science and Technology, Tangshan 063210, China

2. Hebei Key Laboratory of Data Science and Application, Tangshan 063210, China

3. Tangshan Key Laboratory of Data Science, Tangshan 063210, China

Abstract

Federated learning's distributed training structure makes it susceptible to Byzantine poisoning attacks from malicious clients, which slow or disrupt model convergence and reduce model accuracy. To address this, we propose FLRAM, a robust aggregation technique for defending against Byzantine poisoning attacks in federated learning. First, we employ an isolation forest and an improved density-based clustering algorithm to detect anomalies in the magnitudes and signs of clients' local gradients, effectively filtering out gradients with large deviations in magnitude and direction. Next, we construct a credibility matrix over the filtered subset of gradients to evaluate the trustworthiness of each local gradient, and retain those with higher credibility scores. Finally, we aggregate the remaining gradients into a global gradient, which is used to update the global model. Experimental results show that the proposed approach achieves strong defense performance without compromising the accuracy of FedAvg, and exhibits superior robustness compared with existing methods.
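The aggregation pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: a plain `IsolationForest` on gradient norms stands in for the magnitude filter, a plain `DBSCAN` on gradient sign vectors stands in for the paper's improved density-based clustering, and the credibility score is approximated here by mean pairwise cosine similarity; all thresholds and parameter names (`contamination`, `eps`) are assumptions for the sketch.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN


def flram_aggregate(grads, contamination=0.2, eps=0.5, seed=0):
    """Simplified FLRAM-style robust aggregation of client gradients."""
    G = np.asarray(grads, dtype=float)  # shape: (n_clients, dim)
    n = len(G)

    # Step 1a: isolation forest on gradient norms flags magnitude outliers.
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    iso = IsolationForest(contamination=contamination, random_state=seed)
    mag_ok = iso.fit_predict(norms) == 1

    # Step 1b: density clustering on sign vectors flags direction outliers;
    # gradients outside the majority cluster are treated as anomalous.
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(np.sign(G))
    clustered = labels[labels >= 0]
    if clustered.size:
        majority = np.bincount(clustered).argmax()
        dir_ok = labels == majority
    else:
        dir_ok = np.ones(n, dtype=bool)

    # Keep only gradients that pass both anomaly filters.
    S = G[mag_ok & dir_ok]

    # Step 2: credibility score per surviving gradient, here the mean
    # cosine similarity to the other survivors; keep the upper half.
    unit = S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-12)
    cred = (unit @ unit.T).mean(axis=1)
    trusted = S[cred >= np.median(cred)]

    # Step 3: average the trusted gradients into the global gradient.
    return trusted.mean(axis=0)
```

With a handful of similar benign gradients and one sign-flipped, large-norm Byzantine gradient, the Byzantine update falls outside both the norm distribution and the majority sign cluster, so the aggregate stays close to the benign mean.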

Funder

National Natural Science Foundation of China

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering


Cited by 2 articles.
