1. H. B. McMahan, E. Moore, D. Ramage, S. Hampson, B. Agüera y Arcas, Communication-efficient learning of deep networks from decentralized data, in: Artificial Intelligence and Statistics, 2017, pp. 1273–1282.
2. A. K. Sahu, T. Li, M. Sanjabi, M. Zaheer, A. Talwalkar, V. Smith, On the convergence of federated optimization in heterogeneous networks, arXiv preprint arXiv:1812.06127, 2018.
3. T. Chen, G. B. Giannakis, T. Sun, W. Yin, LAG: Lazily aggregated gradient for communication-efficient distributed learning, in: Advances in Neural Information Processing Systems, 2018.
4. C. Xie, S. Koyejo, I. Gupta, Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance, in: International Conference on Machine Learning, 2019, pp. 6893–6901.
5. S. Samarakoon, M. Bennis, W. Saad, M. Debbah, Distributed federated learning for ultra-reliable low-latency vehicular communications, arXiv preprint, 2018.