Author:
Yang Fei, Zhang Xu, Guo Shangwei, Chen Daiyuan, Gan Yan, Xiang Tao, Liu Yang
Abstract
Increasing numbers of artificial intelligence systems employ collaborative machine learning techniques, such as federated learning, to build a powerful shared deep model among participants while keeping their training data local. However, concerns about integrity and privacy in such systems have significantly hindered the adoption of collaborative learning. Consequently, numerous efforts have been made to preserve model integrity and reduce privacy leakage of training data throughout the training phase of various collaborative learning systems. In contrast to prior surveys that focus on a single collaborative learning system, this survey provides a systematic and comprehensive evaluation of security and privacy studies in collaborative training. Our survey begins with an overview of collaborative learning systems from various perspectives. We then systematically summarize the integrity and privacy risks of collaborative learning systems. In particular, we describe state-of-the-art integrity attacks (e.g., Byzantine, backdoor, and adversarial attacks) and privacy attacks (e.g., membership, property, and sample inference attacks), as well as the associated countermeasures. We additionally provide an analysis of open problems to motivate possible future studies.
Funder
Key Research Project of Zhejiang Lab
China Postdoctoral Science Foundation
Key R&D Program of Zhejiang
National Natural Science Foundation of China
CCF-AFSG Research Fund
Publisher
Springer Science and Business Media LLC