Fast and Accurate Deep Leakage from Gradients Based on Wasserstein Distance

Authors:

He Xing (1,2), Peng Changgen (1,3), Tan Weijie (1,3,4)

Affiliation:

1. State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China

2. Guizhou Minzu University, Guiyang 550025, China

3. Guizhou Big Data Academy, Guizhou University, Guiyang 550025, China

4. Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang 550025, China

Abstract

Shared gradients are widely used to protect the private information of training data in distributed machine learning systems. However, research on Deep Leakage from Gradients (DLG) has shown that private training data can be recovered from shared gradients. The DLG method still suffers from issues such as exploding gradients, a low attack success rate, and low fidelity of the recovered data. In this study, a Wasserstein DLG method, named WDLG, is proposed. Theoretical analysis shows that, provided the output layer of the model has a bias term, the label of the data can be predicted from whether the gradient of the bias is negative; this prediction is independent of how well the shared gradient is approximated, so the label can be recovered with 100% accuracy. In the proposed method, the Wasserstein distance is used to compute the error loss between the shared gradient and the virtual gradient, which improves training stability, eliminates the exploding-gradient phenomenon, and improves the fidelity of the recovered data. Moreover, a large-learning-rate strategy is designed to further accelerate model convergence. Finally, the WDLG method is validated on the MNIST, Fashion-MNIST, SVHN, CIFAR-100, and LFW datasets. Experimental results show that the proposed WDLG method provides more stable updates of the virtual data, a higher attack success rate, faster model convergence, higher fidelity of the recovered images, and support for large-learning-rate strategies.
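To make the label-recovery claim concrete, below is a minimal sketch (not the authors' code) of the well-known bias-gradient trick the abstract alludes to: for softmax cross-entropy on a single example, the gradient of the loss with respect to the output-layer bias equals p - onehot(y), so the true-class entry is the only negative one, independent of how accurate the rest of the reconstruction is. The function name is hypothetical.

```python
import torch

def infer_label_from_bias_grad(bias_grad: torch.Tensor) -> int:
    """Recover the ground-truth label from the shared output-layer bias gradient.

    For softmax cross-entropy on a single example, dL/db = p - onehot(y):
    the true-class entry is p_y - 1 < 0, while every other entry p_i >= 0.
    The (only) negative entry therefore indexes the label. Batch averaging
    breaks this guarantee, so the trick applies to batch size 1.
    """
    return torch.argmin(bias_grad).item()
```

Usage would look like `label = infer_label_from_bias_grad(shared_grads[-1])`, assuming (hypothetically) that the last tensor in the shared-gradient list corresponds to the output bias.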
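The paper's exact Wasserstein formulation is not reproduced in this listing, so the sketch below substitutes a one-dimensional (sorted-sample) Wasserstein-1 distance over each layer's flattened gradients as the matching loss in a DLG-style optimization loop. All names (`wasserstein_1d`, `wdlg_attack`), the per-layer summation, and the Adam learning rate are illustrative assumptions, not the published implementation.

```python
import torch
import torch.nn.functional as F

def wasserstein_1d(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Wasserstein-1 distance between the empirical distributions of two
    # equally sized samples: mean absolute difference of sorted values.
    # torch.sort is differentiable w.r.t. values, so this loss propagates
    # gradients back to the dummy input.
    return (torch.sort(a.flatten()).values
            - torch.sort(b.flatten()).values).abs().mean()

def wdlg_attack(model, shared_grads, x_shape, label, steps=300, lr=1.0):
    # Optimize a dummy input so that its per-layer gradients match the
    # shared gradients under the Wasserstein loss. The label is fixed in
    # advance by the bias-gradient trick, so only the input is optimized.
    x_dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x_dummy], lr=lr)  # large lr, echoing the paper's strategy
    y = torch.tensor([label])
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_dummy), y)
        dummy_grads = torch.autograd.grad(loss, tuple(model.parameters()),
                                          create_graph=True)
        dist = sum(wasserstein_1d(dg, sg)
                   for dg, sg in zip(dummy_grads, shared_grads))
        dist.backward()
        opt.step()
    return x_dummy.detach()
```

Compared with the squared-error loss of the original DLG, a distributional distance of this kind reacts less violently to badly scaled gradients, which is plausibly what stabilizes the virtual-data updates and tolerates a larger learning rate; the mechanism in the paper itself may differ.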

Funder

National Basic Research Program of China

Publisher

Hindawi Limited

Subject

Artificial Intelligence, Human-Computer Interaction, Theoretical Computer Science, Software

Cited by 6 articles.

1. Privacy Leakage from Logits Attack and its Defense in Federated Distillation;2024 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN);2024-06-24

2. Improved Intrusion Detection Based on Hybrid Deep Learning Models and Federated Learning;Sensors;2024-06-20

3. A Novel Federated Learning Framework Based on Conditional Generative Adversarial Networks for Privacy Preserving in 6G;Electronics;2024-02-16

4. Bias Mitigation in Federated Learning for Edge Computing;Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies;2023-12-19

5. Feature Decoupled of Deep Mutual Information Maximization;2023 2nd International Conference on Automation, Robotics and Computer Engineering (ICARCE);2023-12-14
