FLRAM: Robust Aggregation Technique for Defense against Byzantine Poisoning Attacks in Federated Learning
Published: 2023-10-30
Issue: 21
Volume: 12
Page: 4463
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Chen Haitian 1,2,3, Chen Xuebin 1,2,3, Peng Lulu 1,2,3, Ma Ruikui 1,2,3
Affiliation:
1. College of Science, North China University of Science and Technology, Tangshan 063210, China
2. Hebei Key Laboratory of Data Science and Application, Tangshan 063210, China
3. Tangshan Key Laboratory of Data Science, Tangshan 063210, China
Abstract
Federated learning's distributed training structure makes it susceptible to Byzantine poisoning attacks from malicious clients, which can slow or disrupt model convergence and reduce model accuracy. To address this, we propose FLRAM, a robust aggregation technique for defending against Byzantine poisoning attacks in federated learning. First, we employ an isolation forest and an improved density-based clustering algorithm to detect anomalies in the magnitudes and signs of clients' local gradients, effectively filtering out gradients with large magnitude and angular deviations. Next, we construct a credibility matrix over the filtered gradient subset to evaluate the trustworthiness of each local gradient and retain those with higher credibility scores. Finally, we aggregate the remaining gradients into a global gradient, which is used to update the global model. Experimental results show that the proposed approach achieves strong defense performance without compromising FedAvg accuracy and exhibits greater robustness than existing solutions.
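The three-stage pipeline the abstract describes can be sketched in a few lines of Python. The sketch below is a minimal illustration, not the paper's implementation: it substitutes scikit-learn's IsolationForest for the paper's isolation forest, plain DBSCAN over gradient sign vectors for the improved density-based clustering, and a pairwise cosine-similarity matrix for the credibility matrix. The function name flram_aggregate and all thresholds (contamination, eps, min_samples, keep_ratio) are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest

def flram_aggregate(grads, contamination=0.2, eps=0.3,
                    min_samples=3, keep_ratio=0.6):
    """Filter suspicious client gradients, score survivors, average them."""
    grads = np.asarray(grads, dtype=float)          # shape: (n_clients, dim)

    # Step 1a: magnitude screening -- an isolation forest over gradient
    # norms flags clients whose update magnitudes deviate from the rest.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    mag_ok = IsolationForest(contamination=contamination,
                             random_state=0).fit_predict(norms) == 1

    # Step 1b: direction screening -- cluster the sign patterns of the
    # gradients (DBSCAN here stands in for the paper's improved
    # density-based clustering) and treat the largest cluster as benign.
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="hamming").fit_predict(np.sign(grads))
    if (labels >= 0).any():
        dir_ok = labels == np.bincount(labels[labels >= 0]).argmax()
    else:
        dir_ok = np.ones(len(grads), dtype=bool)    # no cluster: keep all

    subset = grads[mag_ok & dir_ok]
    if len(subset) == 0:                            # all filtered: fall back
        subset = grads
    if len(subset) == 1:
        return subset[0]

    # Step 2: credibility matrix -- pairwise cosine similarities; each
    # gradient's score is its mean similarity to the other survivors.
    unit = subset / (np.linalg.norm(subset, axis=1, keepdims=True) + 1e-12)
    cred = unit @ unit.T
    scores = (cred.sum(axis=1) - 1.0) / (len(subset) - 1)

    # Step 3: keep the most credible gradients and average (FedAvg-style).
    k = max(1, int(keep_ratio * len(subset)))
    return subset[np.argsort(scores)[-k:]].mean(axis=0)
```

Under these assumptions, a server would call flram_aggregate once per round on the flattened client updates and apply the returned global gradient to the shared model.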
Funder
National Natural Science Foundation of China
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by: 2 articles