A Communication-Efficient, Privacy-Preserving Federated Learning Algorithm Based on Two-Stage Gradient Pruning and Differentiated Differential Privacy
Author:
Li Yong 1,2,3; Du Wei 1; Han Liquan 1; Zhang Zhenjian 1; Liu Tongtong 1
Affiliation:
1. School of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
2. AI Research Institute, Changchun University of Technology, Changchun 130012, China
3. School of Computer Science and Technology, Jilin University, Changchun 130012, China
Abstract
Federated learning still faces unsolved problems, most notably privacy risks and high communication costs. Differential privacy (DP) offers effective privacy protection by adding noise to parameters under a rigorous privacy definition, but excessive noise can compromise model accuracy. High communication cost is a further challenge: training large-scale federated models can be slow and expensive in terms of communication resources, and various model pruning algorithms have been proposed to mitigate this. To address both challenges, this paper introduces IsmDP-FL, a communication-efficient, privacy-preserving FL algorithm based on two-stage gradient pruning and differentiated differential privacy. In the first stage, the locally trained model undergoes gradient pruning, after which differential privacy noise is added to the important parameters selected by the pruning; non-important parameters are pruned by a fixed ratio, and differentiated differential privacy is applied to the remaining parameters in each network layer. In the second stage, gradient pruning is performed again when the update is uploaded to the server for aggregation, and the aggregated result is returned to the client to complete the federated learning round. Extensive experiments demonstrate that the proposed method achieves high communication efficiency, preserves model privacy, and avoids unnecessary consumption of the privacy budget.
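The first-stage idea described above can be sketched as follows. This is a minimal illustration of magnitude-based gradient pruning combined with differentiated Gaussian noise, not the paper's IsmDP-FL implementation; all function names, ratios, and noise scales (`top_ratio`, `sigma_top`, `sigma_rest`, etc.) are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_one(grad, top_ratio=0.2, drop_ratio=0.5,
              sigma_top=0.1, sigma_rest=0.4, clip=1.0):
    """One-layer sketch: keep the top_ratio largest-magnitude gradient
    entries as "important" and add light Gaussian noise to them; prune
    drop_ratio of the remaining (non-important) entries to zero and add
    heavier noise to the survivors (the "differentiated" treatment)."""
    # clip entries to bound sensitivity before adding DP noise
    flat = np.clip(grad.ravel(), -clip, clip)
    order = np.argsort(np.abs(flat))            # ascending by magnitude
    n_top = max(1, int(top_ratio * flat.size))
    top_idx = order[-n_top:]                    # important parameters
    rest_idx = order[:-n_top]                   # non-important parameters
    n_drop = int(drop_ratio * rest_idx.size)
    kept_rest = rest_idx[n_drop:]               # smallest-magnitude ones dropped
    out = np.zeros_like(flat)
    out[top_idx] = flat[top_idx] + rng.normal(0, sigma_top * clip, n_top)
    out[kept_rest] = flat[kept_rest] + rng.normal(0, sigma_rest * clip,
                                                  kept_rest.size)
    return out.reshape(grad.shape)
```

The zeroed entries need not be transmitted, which is where the communication saving comes from; only the noised survivors are uploaded for server-side aggregation in the second stage.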
Funder
Science and Technology Research Planning Project of Jilin Provincial Department of Education in China
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry