Affiliation:
1. School of Engineering and Information Technology, University of New South Wales at ADFA, Australia
2. School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
Abstract
Federated Learning (FL), as an emerging form of distributed machine learning (ML), can protect participants' private data from being substantially disclosed to cyber adversaries. It has potential uses in many large-scale, data-rich environments, such as the Internet of Things (IoT), the Industrial IoT, Social Media (SM), and the emerging SM 3.0. However, federated learning is susceptible to data leakage through model inversion attacks, which analyze the model updates that participants upload. Such attacks can reveal private data and thereby undermine some of the critical reasons for adopting federated learning in the first place. This article proposes a novel differential privacy (DP)-based deep federated learning framework. We theoretically prove that our framework fulfills DP's requirements under distinct privacy levels by appropriately scaling the variance of the added Gaussian noise. We then develop a Differentially Private Data-Level Perturbation (DP-DLP) mechanism to conceal any single data point's influence on the training phase. Experiments on real-world datasets, specifically the Social Media 3.0, Iris, and Human Activity Recognition (HAR) datasets, demonstrate that the proposed mechanism offers high privacy, enhanced utility, and elevated efficiency. Consequently, it simplifies the development of DP-based FL models with different trade-off preferences between data utility and privacy level.
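The abstract does not spell out the authors' DP-DLP mechanism, but the idea of satisfying DP by scaling Gaussian noise variance to the target privacy level follows the standard Gaussian mechanism. The sketch below is a minimal illustration of that general technique, not the paper's implementation: the function name, the clipping step, and the sigma formula (the classic analytic bound from the DP literature) are assumptions, and epsilon/delta/clip_norm are hypothetical parameter names.

```python
# Illustrative sketch of the standard Gaussian mechanism for (epsilon, delta)-DP,
# NOT the paper's DP-DLP mechanism. Assumes the classic bound
# sigma >= clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon (Dwork & Roth, 2014).
import numpy as np

def gaussian_mechanism(update, clip_norm, epsilon, delta, rng=None):
    """Clip a model update to bound its L2 sensitivity, then add Gaussian noise.

    A smaller epsilon (stronger privacy) yields a larger noise variance,
    matching the abstract's trade-off between privacy level and utility.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Example: perturb a client's update before uploading it to the FL server.
update = np.random.randn(10)
private_update = gaussian_mechanism(update, clip_norm=1.0, epsilon=1.0, delta=1e-5)
```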
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture
Cited by
8 articles.