Affiliations:
1. The Hong Kong Polytechnic University
2. JD Explore Academy, JD.com
Abstract
Recent studies have shown that training samples can be recovered from shared gradients, a class of attacks known as Gradient Inversion (GradInv). However, there remains a lack of extensive surveys covering recent advances and providing a thorough analysis of this issue. In this paper, we present a comprehensive survey on GradInv, aiming to summarize cutting-edge research and broaden horizons across different domains. First, we propose a taxonomy of GradInv attacks by characterizing existing attacks into two paradigms: iteration-based and recursion-based attacks. In particular, we identify critical ingredients of iteration-based attacks, including data initialization, model training, and gradient matching. Second, we summarize emerging defense strategies against GradInv attacks. We find that these approaches focus on three perspectives: data obscuration, model improvement, and gradient protection. Finally, we discuss some promising directions and open problems for further research.
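To illustrate why gradients can leak training data, here is a minimal sketch, not taken from the survey itself, in the spirit of the recursion-based (closed-form) paradigm the abstract mentions. It uses the well-known observation that for a linear layer with a bias term and squared-error loss, each input coordinate equals the weight gradient divided by the bias gradient. All variable names are illustrative.

```python
# Toy closed-form gradient inversion for a single linear neuron.
# Model: y_hat = w.x + b, loss L = (y_hat - t)^2.

def forward_grads(w, b, x, t):
    """Return dL/dw (a list) and dL/db for the squared-error loss."""
    err = sum(wi * xi for wi, xi in zip(w, x)) + b - t
    return [2 * err * xi for xi in x], 2 * err

# Victim side: private sample (x_true, t_true), shared parameters (w, b).
w, b = [0.2, -0.5, 0.8], 0.1
x_true, t_true = [0.3, -0.7, 0.9], 1.0
g_w, g_b = forward_grads(w, b, x_true, t_true)

# Attacker side: since dL/dw_i = 2*err*x_i and dL/db = 2*err,
# whenever dL/db != 0 each input coordinate is recovered exactly
# as dL/dw_i / dL/db -- no iterative optimization needed.
x_rec = [gi / g_b for gi in g_w]
print(x_rec)  # matches x_true up to floating-point error
```

Iteration-based attacks (e.g., DLG-style gradient matching) instead initialize dummy data and optimize it so that its gradients match the observed ones; the closed-form case above shows the underlying leakage that makes such optimization feasible.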
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
3 articles.
1. Dataset Distillation: A Comprehensive Review;IEEE Transactions on Pattern Analysis and Machine Intelligence;2024-01
2. Privacy-Preserving Cross-Silo Federated Learning Atop Blockchain for IoT;IEEE Internet of Things Journal;2023-12-15
3. The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning;2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR);2023-06