1. J. Wangni et al., "Gradient sparsification for communication-efficient distributed optimization," in Proc. Adv. Neural Inf. Process. Syst., 2018.
2. H. Wang et al., "ATOMO: Communication-efficient learning via atomic sparsification," in Proc. Adv. Neural Inf. Process. Syst., 2018.
3. "Fast Federated Learning by Balancing Communication Trade-Offs."
4. D. Alistarh et al., "QSGD: Communication-efficient SGD via gradient quantization and encoding," in Proc. Adv. Neural Inf. Process. Syst., 2017.
5. T. Lin et al., "Ensemble distillation for robust model fusion in federated learning," in Proc. Adv. Neural Inf. Process. Syst., 2020.