Affiliation:
1. School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
2. School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
3. School of Computer Science, University of Technology Sydney, Sydney, Australia
Abstract
With the rapid development of artificial intelligence (AI), privacy issues have attracted wide attention from both society and individuals. It is desirable to make data available but invisible, i.e., to perform data analysis and computation without disclosing the raw data to unauthorized entities. Federated learning (FL) has emerged as a promising privacy-preserving computation paradigm for AI. However, new privacy issues arise in FL-based applications, because various inference attacks can still recover information about the raw data from local models or gradients, leading directly to privacy disclosure. It is therefore critical to resist these attacks in order to achieve fully privacy-preserving computation. In light of the overwhelming variety of privacy-preserving computation protocols, we survey them from a series of perspectives to give researchers and scholars a clearer picture of the field. Concretely, we discuss a classification of attacks covering four kinds of inference attacks as well as the malicious-server and poisoning attacks. In addition, this paper systematically captures the state of the art in privacy-preserving computation protocols by analyzing their design rationale, reproducing the experiments of classic schemes, and evaluating all discussed protocols in terms of efficiency and security properties. Finally, this survey identifies a number of interesting future directions.
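To make the setting concrete, the aggregation step the abstract alludes to (clients share model updates rather than raw data, and it is precisely these updates that inference attacks target) can be sketched with a minimal FedAvg-style weighted average in the spirit of McMahan et al. [2]. This is an illustrative sketch only: weights are plain lists of floats, and real systems operate on tensors and typically layer secure aggregation on top.

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights by local dataset size.

    client_weights: list of per-client weight vectors (lists of floats).
    client_sizes: number of local training examples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            # Each client contributes in proportion to its data volume.
            global_weights[i] += w * (size / total)
    return global_weights


# Example: two clients with different data volumes; the global model
# is pulled toward the client holding more data.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]
print(fedavg(clients, sizes))  # → [2.5, 3.5]
```

Note that although raw data never leaves the clients, the shared weight vectors themselves are exactly the artifacts from which gradient-inversion and membership-inference attacks operate, which motivates the protocols this survey covers.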
Publisher
Association for Computing Machinery (ACM)
References
134 articles.
1. Stuart J Russell. 2010. Artificial intelligence a modern approach. Pearson Education, Inc.
2. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS). 1273–1282.
3. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. 2017. Federated learning: Strategies for improving communication efficiency. doi:10.48550/arXiv.1610.05492.
4. Keith Bonawitz et al. 2019. Towards federated learning at scale: System design. In Proceedings of Machine Learning and Systems (MLSys).
5. BFU: Bayesian Federated Unlearning with Parameter Self-Sharing.
Cited by
4 articles.