1. Achieving linear speedup with partial worker participation in non-IID federated learning; Yang; International Conference on Learning Representations, 2021
2. Hybrid local SGD for federated learning with heterogeneous communications
3. Stochastic gradient push for distributed deep learning; Assran; 36th International Conference on Machine Learning, 2019
4. MATCHA: Speeding up decentralized SGD via matching decomposition sampling
5. A unified theory of decentralized SGD with changing topology and local updates; Koloskova; 37th International Conference on Machine Learning, 2020