Author:
Mamidala, Amith R.; Vishnu, Abhinav; Panda, Dhabaleswar K.
Publisher:
Springer Berlin Heidelberg
References (14 articles):
1. Bernaschi, M., Richelli, G.: MPI Collective Communication Operations on Large Shared Memory Systems. In: Proceedings of the Ninth Euromicro Workshop on Parallel and Distributed Processing (2001)
2. Bruck, J., Ho, C.-T., Kipnis, S., Upfal, E., Weathersby, D.: Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems. IEEE Transactions on Parallel and Distributed Systems 8(11), 1143–1156 (1997)
3. Gropp, W., Lusk, E., Doss, N., Skjellum, A.: A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard. Parallel Computing 22(6), 789–828 (1996)
4. InfiniBand Trade Association: InfiniBand Architecture Specification, Release 1.1 (October 2004), http://www.infinibandta.org
5. Kini, S.P., et al.: Lecture Notes in Computer Science (2003)
Cited by 12 articles:
1. Optimizing MPI Collectives on Shared Memory Multi-Cores;Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis;2023-11-11
2. Accelerating communication with multi‐HCA aware collectives in MPI;Concurrency and Computation: Practice and Experience;2023-08-09
3. Optimizing MPI Collectives with Hierarchical Design for Efficient CPU Oversubscription;2023
4. Designing Hierarchical Multi-HCA Aware Allgather in MPI;Workshop Proceedings of the 51st International Conference on Parallel Processing;2022-08-29
5. Analyzing the performance of hierarchical collective algorithms on ARM-based multicore clusters;2022 30th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP);2022-03