User-Level Label Leakage from Gradients in Federated Learning

Authors:

Aidmar Wainakh¹, Fabrizio Ventola², Till Müßig³, Jens Keim³, Carlos Garcia Cordero¹, Ephraim Zimmer¹, Tim Grube¹, Kristian Kersting², Max Mühlhäuser¹

Affiliation:

1. Telecooperation Lab, Technical University of Darmstadt

2. Artificial Intelligence and Machine Learning Lab, Technical University of Darmstadt

3. Technical University of Darmstadt

Abstract

Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users' training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potentially sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.
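To illustrate the property the abstract alludes to, below is a minimal sketch (in PyTorch, not taken from the paper) of how the direction and magnitude of the last layer's weight gradient can hint at which labels are present in a training batch. It assumes a classifier whose final fully connected layer receives non-negative (e.g., post-ReLU) activations and is trained with cross-entropy; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a model's final fully connected layer.
num_classes, num_features, batch_size = 10, 32, 8
fc = nn.Linear(num_features, num_classes)

# Simulated post-ReLU activations (non-negative) and the victim's labels,
# which the attacker never observes directly.
h = torch.relu(torch.randn(batch_size, num_features))
y = torch.randint(0, num_classes, (batch_size,))

# The gradient a federated-learning participant would share with the server.
loss = F.cross_entropy(fc(h), y)
grad_w = torch.autograd.grad(loss, fc.weight)[0]   # shape: (num_classes, num_features)

# Heuristic exploited by gradient-based label leakage: with non-negative inputs
# to the last layer, the gradient row of a class that occurs in the batch tends
# to sum to a negative value, and its magnitude grows with the label's frequency.
row_sums = grad_w.sum(dim=1)
inferred = sorted((row_sums < 0).nonzero(as_tuple=True)[0].tolist())

print("labels actually in batch:", sorted(set(y.tolist())))
print("classes inferred as present:", inferred)
```

The sign pattern follows from the cross-entropy gradient: each sample contributes (softmax probability minus the one-hot label) times its activation to the corresponding weight row, so rows of present classes receive large negative contributions while rows of absent classes accumulate only small positive ones.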

Publisher

Privacy Enhancing Technologies Symposium Advisory Board

Subject

General Medicine

Cited by 25 articles.

1. Client-Side Gradient Inversion Attack in Federated Learning Using Secure Aggregation;IEEE Internet of Things Journal;2024-09-01

2. FedDADP: A Privacy-Risk-Adaptive Differential Privacy Protection Method for Federated Android Malware Classifier;2024 International Joint Conference on Neural Networks (IJCNN);2024-06-30

3. Breaking Secure Aggregation: Label Leakage from Aggregated Gradients in Federated Learning;IEEE INFOCOM 2024 - IEEE Conference on Computer Communications;2024-05-20

4. Federated Learning for Radar Gesture Recognition Based on Spike Timing-Dependent Plasticity;IEEE Transactions on Aerospace and Electronic Systems;2024-04

5. Maximum Knowledge Orthogonality Reconstruction with Gradients in Federated Learning;2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV);2024-01-03
