Funder: National Research Foundation of Korea