bbTopk: Bandwidth-Aware Sparse Allreduce with Blocked Sparsification for Efficient Distributed Training

Authors:

Chen Chang 1, Li Min 2, Yang Chao 2

Affiliations:

1. Center for Data Science, Peking University

2. School of Mathematical Sciences, Peking University

Publisher:

IEEE


Cited by 1 article:

1. Centauri: Enabling Efficient Scheduling for Communication-Computation Overlap in Large Model Training via Communication Partitioning. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 2024-04-27.
