Accelerating distributed deep neural network training with pipelined MPI allreduce

Authors

Castelló, Adrián; Quintana-Ortí, Enrique S.; Duato, José

Abstract

TensorFlow (TF) is usually combined with the Horovod (HVD) workload distribution package to obtain a parallel tool for training deep neural networks on clusters of computers. HVD, in turn, utilizes a blocking Allreduce primitive to share information among processes, combined with a communication thread to overlap communication with computation. In this work, we perform a thorough experimental analysis to expose (1) the importance of selecting the best algorithm in MPI libraries to realize the Allreduce operation; and (2) the performance acceleration that can be attained when replacing a blocking Allreduce with its non-blocking counterpart (while maintaining the blocking behaviour via the appropriate synchronization mechanism). Furthermore, (3) we explore the benefits of applying pipelining to the communication exchange, demonstrating that these improvements carry over to distributed training via TF+HVD. Finally, (4) we show that pipelining can also boost performance for applications that make heavy use of other collectives, such as Broadcast and Reduce-Scatter.
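The pipelined exchange the abstract refers to builds on segmented, ring-style collectives: the message is split into chunks so that the transfer of one segment can overlap with the reduction of another. Below is a minimal, hypothetical Python sketch (not the authors' code, and without real MPI calls) that simulates the chunked ring allreduce, i.e., the reduce-scatter + allgather scheme whose per-segment steps a pipelined implementation would overlap using non-blocking sends and receives:

```python
def ring_allreduce(bufs):
    """Sum-allreduce across simulated ranks via ring reduce-scatter + allgather.

    bufs: one equal-length list per simulated rank (length divisible by the
    rank count). Modified in place; every rank ends with the elementwise sum.
    This is a sequential simulation -- on a real cluster each step's segment
    transfers run concurrently, which is what segment pipelining exploits.
    """
    p = len(bufs)                       # number of simulated ranks
    n = len(bufs[0])                    # elements per rank
    assert n % p == 0, "buffer length must be divisible by the rank count"
    c = n // p                          # segment (chunk) size

    # Phase 1: reduce-scatter. At step s, rank r forwards segment (r - s) mod p
    # to its ring neighbour, which accumulates it. After p - 1 steps, rank r
    # holds the fully reduced segment (r + 1) mod p.
    for step in range(p - 1):
        # Snapshot outgoing segments first to mimic simultaneous exchanges.
        sends = [bufs[r][((r - step) % p) * c:((r - step) % p) * c + c]
                 for r in range(p)]
        for r in range(p):
            k = (r - step) % p          # segment index rank r forwarded
            dst = (r + 1) % p           # ring neighbour
            for i in range(c):
                bufs[dst][k * c + i] += sends[r][i]

    # Phase 2: allgather. Each rank forwards its newest complete segment,
    # (r + 1 - s) mod p at step s, and the receiver overwrites its copy.
    for step in range(p - 1):
        sends = [bufs[r][((r + 1 - step) % p) * c:((r + 1 - step) % p) * c + c]
                 for r in range(p)]
        for r in range(p):
            k = (r + 1 - step) % p
            dst = (r + 1) % p
            bufs[dst][k * c:k * c + c] = sends[r]

    return bufs


# Two simulated ranks, two segments each:
data = [[1, 2, 3, 4], [10, 20, 30, 40]]
ring_allreduce(data)
# each rank now holds [11, 22, 33, 44]
```

In a real MPI implementation, each segment transfer would be issued as a non-blocking operation (e.g., `MPI_Isend`/`MPI_Irecv`, or a segment-wise `MPI_Iallreduce` completed later with `MPI_Wait`), so the communication of one chunk proceeds while another is being reduced; the segment size then becomes the tuning knob that the paper's pipelining analysis explores.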

Funder

Ministerio de Ciencia, Innovación y Universidades

Agencia Valenciana de la Innovación

PRACE preparatory access

Publisher

Springer Science and Business Media LLC

Subject

Computer Networks and Communications; Software

Cited by 9 articles.

1. Accelerating MPI AllReduce Communication with Efficient GPU-Based Compression Schemes on Modern GPU Clusters;ISC High Performance 2024 Research Paper Proceedings (39th International Conference);2024-05

2. SUARA: A scalable universal allreduce communication algorithm for acceleration of parallel deep learning applications;Journal of Parallel and Distributed Computing;2024-01

3. Uniform Algorithms for Reduce-scatter and (most) other Collectives for MPI;2023 IEEE International Conference on Cluster Computing (CLUSTER);2023-10-31

4. Interactive visual analytics of parallel training strategies for DNN models;Computers & Graphics;2023-10

5. Accelerating communication with multi‐HCA aware collectives in MPI;Concurrency and Computation: Practice and Experience;2023-08-09
