1. Local SGD converges fast and communicates little; Stich; International Conference on Learning Representations, 2019
2. Communication-efficient SGD: from local SGD to one-shot averaging; Spiridonoff; NeurIPS, 2021
3. Local SGD: unified theory and new efficient methods; Gorbunov; International Conference on Artificial Intelligence and Statistics, 2021
4. Distributed learning, communication complexity and privacy; Balcan; Conference on Learning Theory, JMLR Workshop and Conference Proceedings, 2012
5. Lower bounds for learning distributions under communication constraints via Fisher information; Barnes; The Journal of Machine Learning Research, 2020