CVFL: A Chain-like and Verifiable Federated Learning Scheme with Computational Efficiency Based on Lagrange Interpolation Functions
Published: 2023-11-04
Volume: 11
Issue: 21
Page: 4547
ISSN: 2227-7390
Container-title: Mathematics
Short-container-title: Mathematics
Language: en
Author:
Wang Mengnan (1), Cao Chunjie (2,3), Wang Xiangyu (4), Zhang Qi (5), Jing Zhaoxing (2,3), Li Haochen (2,3), Sun Jingzhang (2,3)
Affiliation:
1. School of Computer Science and Technology, Hainan University, Haikou 570228, China
2. School of Cryptology, Hainan University, Haikou 570228, China
3. School of Cyberspace Security, Hainan University, Haikou 570228, China
4. School of Network and Information Security, Xidian University, Xi’an 710126, China
5. Faculty of Data Science, City University of Macau, Macau SAR, China
Abstract
Data privacy and security concerns have attracted significant attention, leading to the frequent occurrence of data silos in deep learning. Federated learning (FL) has emerged to address this issue. However, simple federated learning frameworks still face two security risks during training. First, sharing local gradients instead of private datasets does not completely eliminate the possibility of data leakage. Second, a malicious server may produce inaccurate aggregation parameters by forging or simplifying the aggregation process, ultimately causing model training to fail. To address these issues and train high-performance models, we design a verifiable federated learning scheme called CVFL, in which users are organized in a chain-like (serial) structure to resist inference attacks, and the privacy of user datasets is further protected through serial encryption. We ensure secure model aggregation through a verification protocol based on Lagrange interpolation functions. The serial transmission of local gradients effectively reduces the communication burden on the cloud server, and our verification protocol avoids the computational overhead of large numbers of encryption and decryption operations without sacrificing model accuracy. Experimental results on the MNIST dataset demonstrate that, after 10 epochs of training with 100 users, our scheme achieves a model accuracy of 90.63% for an MLP architecture under an IID data distribution and 87.47% under a non-IID distribution. For a CNN architecture, it achieves 96.72% accuracy under an IID distribution and 93.53% under a non-IID distribution. These evaluations corroborate the practical performance of the proposed scheme in both accuracy and efficiency.
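The abstract names Lagrange interpolation as the primitive behind CVFL's verification protocol; the record does not include the protocol itself. As a minimal sketch of that primitive only (the toy points, the withheld check point, and the `lagrange_eval` helper are illustrative assumptions, not the paper's construction), the idea is that a value consistent with the unique low-degree polynomial through a set of shares can be checked by re-evaluating the interpolant at an extra point:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique polynomial passing through `points`
    at `x`, using Lagrange basis polynomials with exact
    rational arithmetic to avoid floating-point error."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                # Lagrange basis factor: (x - xj) / (xi - xj)
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Toy check (hypothetical setup, not CVFL's exact protocol):
# three shares lying on y = x^2 + 2x + 3; a verifier holding a
# fourth evaluation can test the aggregator's answer against
# the interpolated polynomial.
shares = [(1, 6), (2, 11), (3, 18)]
assert lagrange_eval(shares, 4) == 27  # f(4) = 16 + 8 + 3
```

In a verification setting, the check point (here `x = 4`) would be withheld from the aggregator, so a forged or simplified aggregation result fails the interpolation consistency test with high probability.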
Funder:
Key Research and Development Program of National Natural Science Foundation of China; Joint Funds of the National Natural Science Foundation of China
Subject:
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)