Author:
Li Cen-Jhih, Huang Pin-Han, Ma Yi-Ting, Hung Hung, Huang Su-Yun
Abstract
Federated learning is a framework in which multiple devices or institutions, called local clients, collaboratively train a global model without sharing their data. In federated learning with a central server, an aggregation algorithm integrates the model information sent from local clients to update the parameters of the global model. The sample mean is the simplest and most commonly used aggregation method. However, it is not robust to data with outliers or under the Byzantine problem, where Byzantine clients send malicious messages to interfere with the learning process. Several robust aggregation methods have been introduced in the literature, including the marginal median, the geometric median, and the trimmed mean. In this article, we propose an alternative robust aggregation method, named γ-mean, which is the minimum divergence estimation based on a robust density power divergence. The γ-mean aggregation mitigates the influence of Byzantine clients by assigning them smaller weights. This weighting scheme is data-driven and controlled by the γ value. Robustness from the viewpoint of the influence function is discussed, and some numerical results are presented.
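The abstract describes γ-mean only at a high level: a minimum divergence estimator under the density power divergence whose data-driven weights, controlled by γ, down-weight outlying client updates. The sketch below is one plausible reading of that description, not the authors' reference implementation: it uses a fixed-point iteration in which each client's weight decays exponentially in its squared distance to the current estimate. The function name `gamma_mean`, the scale parameter `sigma`, the median initialization, and the specific weight formula are all illustrative assumptions.

```python
import numpy as np

def gamma_mean(updates, gamma=0.1, sigma=1.0, n_iter=50, tol=1e-8):
    """Aggregate client updates with a density-power-divergence-style
    weighted mean (illustrative sketch, not the paper's reference code).

    Each row of `updates` is one client's parameter vector. Weights decay
    exponentially in the squared distance to the current estimate, scaled
    by `gamma`, so outlying (e.g. Byzantine) updates receive smaller
    weights; gamma -> 0 recovers the plain sample mean.
    """
    x = np.asarray(updates, dtype=float)       # shape (n_clients, dim)
    mu = np.median(x, axis=0)                  # robust starting point (assumption)
    for _ in range(n_iter):
        d2 = np.sum((x - mu) ** 2, axis=1)     # squared distances to mu
        w = np.exp(-gamma * d2 / (2.0 * sigma ** 2))
        w /= w.sum()                           # normalize weights
        mu_new = w @ x                         # weighted mean update
        if np.linalg.norm(mu_new - mu) < tol:  # stop at a fixed point
            break
        mu = mu_new
    return mu

# Example: 8 honest clients near 1.0, 2 Byzantine clients sending 10.0.
rng = np.random.default_rng(0)
honest = rng.normal(1.0, 0.1, size=(8, 3))
byzantine = np.full((2, 3), 10.0)
print(gamma_mean(np.vstack([honest, byzantine]), gamma=0.5))
```

With γ = 0.5 the Byzantine rows receive near-zero weight and the aggregate stays close to 1.0, whereas the plain mean would be pulled toward 10.0; larger γ down-weights distant updates more aggressively, matching the role the abstract assigns to the γ value.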
Subject
General Physics and Astronomy
Cited by
3 articles.