Authors:
Zhou Xingchen, Xu Ming, Wu Yiming, Zheng Ning
Abstract
Federated learning is a novel distributed learning framework that enables thousands of participants to collaboratively train a deep learning model. To protect the confidentiality of the training data, the information shared between the server and participants is limited to model parameters. However, this setting is vulnerable to model poisoning attacks, since participants have permission to modify the model parameters. In this paper, we systematically investigate such threats in federated learning and propose a novel optimization-based model poisoning attack. Unlike existing methods, we primarily focus on the effectiveness, persistence, and stealth of attacks. Numerical experiments demonstrate that the proposed method not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods.
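The vulnerability the abstract describes can be illustrated with a minimal numerical sketch: when the server simply averages client-submitted parameters, a single malicious participant can scale its update so the averaged global model lands near an attacker-chosen target. This follows the well-known model-replacement idea and is only an assumed illustration, not the paper's specific optimization-based attack; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients = 10
w_global = np.zeros(5)  # current global model parameters

# Benign clients return models that drift only slightly from the global model.
benign = [w_global + 0.01 * rng.standard_normal(5) for _ in range(n_clients - 1)]

# Attacker's desired (poisoned) parameters.
w_target = np.ones(5)

# Model replacement: scale the malicious submission so that, after averaging,
# the global model moves (approximately) to w_target. This assumes benign
# clients stay close to the current global model.
w_attack = n_clients * w_target - (n_clients - 1) * w_global

# Server-side federated averaging over all submitted models.
new_global = np.mean(benign + [w_attack], axis=0)
```

After aggregation, `new_global` sits very close to `w_target`, even though nine of the ten clients were honest, which is why averaging alone offers no protection against parameter manipulation.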
Funder
Natural Science Foundation of China; Key Research and Development Plan Project of Zhejiang Province
Subject
Computer Networks and Communications
References: 45 articles.
Cited by: 79 articles.