Author:
Zhang Chang, Yang Shunkun, Mao Lingfeng, Ning Huansheng
Abstract
In recent years, deep learning methods trained on large amounts of data have achieved substantial success in numerous fields. However, as regulations protecting private user data have tightened, access to such data has become restricted. To overcome this limitation, federated learning (FL) has been widely adopted for training deep learning models without centralizing data. However, the inaccessibility of FL data and the heterogeneity of client data make it difficult to provide security and protect privacy in FL. In addition, security and privacy anomalies in the corresponding systems significantly hinder the application of FL. Numerous methods have been proposed to maintain model security and mitigate the leakage of private training data during the FL training phase. Existing surveys categorize FL attacks from a defensive standpoint, but this makes it inefficient to pinpoint attack points and implement timely defenses. In contrast, our survey comprehensively categorizes and summarizes detected anomalies from the client, server, and communication perspectives, facilitating easier identification and timely defense measures. Our survey provides an overview of the FL system and briefly introduces FL security and privacy anomalies. Next, we detail the existing security and privacy anomalies and the methods of detection and defense from the perspectives of the client, the server, and the communication process. Finally, we address the security and privacy anomalies that arise in non-independent and identically distributed (non-IID) cases during FL and summarize the related research progress. This survey aims to provide a systematic and comprehensive review of security and privacy research in FL to help readers understand the progress and better apply FL in additional scenarios.
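To make the training scheme the abstract describes concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical FL algorithm: clients train on their own data and the server only aggregates model weights, so raw data never leaves the client. The toy least-squares task, the function names, and the client setup are illustrative assumptions, not taken from the surveyed paper.

```python
# Minimal FedAvg sketch: the server never sees raw client data, only
# locally trained model weights. All names and the toy task here are
# illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps on a least-squares loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: average client models, weighted by data size."""
    client_models = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(client_models, axis=0, weights=sizes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three clients with differently sized local datasets (a mild form of
    # the client heterogeneity the abstract mentions).
    clients = []
    for n in (20, 50, 80):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.01 * rng.normal(size=n)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(10):  # ten communication rounds
        w = fedavg_round(w, clients)
    print("recovered weights:", w)  # close to [2, -1]
```

Because only the aggregated weights travel between client and server, this protocol is exactly the surface the survey examines: a malicious client can poison its local update, a curious server can try to invert the received weights, and the communication channel itself can leak or be tampered with.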
Publisher
Springer Science and Business Media LLC