Differential Private Defense Against Backdoor Attacks in Federated Learning
Published: 2024-08-28
Volume: 9
Issue: 2
Pages: 31-39
ISSN: 2832-6024
Container-title: Frontiers in Computing and Intelligent Systems
Short-container-title: FCIS
Authors: Miao Lu, Li Weibo, Zhao Jia, Zhou Xin, Wu Yao
Abstract
Federated learning has been applied in a wide variety of applications, in which clients upload their local updates instead of providing their datasets to jointly train a global model. However, the training process of federated learning is vulnerable to adversarial attacks (e.g., backdoor attacks) in the presence of malicious clients. Previous work showed that differential privacy (DP) can be used to defend against backdoor attacks, at the cost of a substantial loss of model utility. In this work, we study two kinds of backdoor attacks and propose a DP-based defense, called Clip Norm Decay (CND), that maintains utility while defending against them. CND decreases the clipping threshold of model updates over the course of training, which in turn reduces the amount of injected noise. In addition, CND bounds the norm of malicious updates by adaptively setting appropriate thresholds according to the current model updates. Empirical results show that CND substantially enhances the accuracy of the main task when defending against backdoor attacks. Moreover, extensive experiments demonstrate that our method provides a stronger defense than the original DP mechanism, further reducing the attack success rate even under a strong threat model. Additional experiments on property inference attacks indicate that CND also maintains utility when defending against privacy attacks and does not weaken the privacy preservation of DP.
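The abstract describes the mechanism in enough detail to sketch it. The following is a minimal, illustrative Python sketch, not the authors' implementation: the function name cnd_aggregate and the hyperparameters c0, decay, and sigma are hypothetical, and capping the threshold at the median update norm is only one plausible reading of "adaptively setting appropriate thresholds according to the current model updates".

```python
import numpy as np

def cnd_aggregate(updates, round_t, c0=1.0, decay=0.99, sigma=0.5, rng=None):
    """Server-side aggregation with a decaying clipping threshold (CND sketch).

    updates : list of 1-D numpy arrays, the flattened client model updates
    round_t : current communication round (0-indexed)
    c0, decay, sigma : hypothetical hyperparameters -- initial clipping
        threshold, per-round decay factor, and Gaussian noise multiplier.
    """
    rng = rng or np.random.default_rng()
    # Decay the clipping threshold as training progresses, so the DP noise
    # (whose scale is proportional to the threshold) shrinks over time.
    clip = c0 * (decay ** round_t)
    # Assumed adaptive step: never let the threshold exceed the median update
    # norm, so large-norm malicious updates are always clipped down.
    norms = [float(np.linalg.norm(u)) for u in updates]
    clip = min(clip, float(np.median(norms)))
    # Scale each update so its norm is at most the threshold.
    clipped = [u * min(1.0, clip / (n + 1e-12)) for u, n in zip(updates, norms)]
    avg = np.mean(clipped, axis=0)
    # Gaussian-mechanism noise calibrated to the (smaller) clipping threshold.
    noise = rng.normal(0.0, sigma * clip / len(updates), size=avg.shape)
    return avg + noise
```

Because the Gaussian-mechanism noise scale is proportional to the clipping threshold, shrinking the threshold as honest updates naturally shrink late in training injects less noise for the same privacy guarantee, which is the utility argument the abstract makes.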
Publisher: Darcy & Roy Press Co. Ltd.