Affiliation:
1. Department of Computer Science, University of California, Santa Barbara, CA
Abstract
MPI is a message-passing standard widely used for developing high-performance parallel applications. Because of restrictions in the MPI computation model, conventional implementations on shared-memory machines map each MPI node to an OS process, an approach that suffers serious performance degradation under multiprogramming, especially when the OS job scheduler employs a space/time sharing policy. In this paper, we study compile-time and run-time support for MPI using threads and demonstrate our optimization techniques for executing a large class of MPI programs written in C. The compile-time transformation adopts thread-specific data structures to eliminate the use of global and static variables in C code. The runtime support includes an efficient point-to-point communication protocol based on a novel lock-free queue management scheme. Our experiments on an SGI Origin 2000 show that our MPI prototype, called TMPI, is competitive with SGI's native MPI implementation in a dedicated environment, and that it has significant performance advantages, with up to a 23-fold improvement, in a multiprogrammed environment.
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
References: 33 articles.
1. Information Power Grid. http://ipg.arc.nasa.gov/.
2. MPI for NEC Supercomputers. http://www.ccrl-nece.technopark.gmd.de/'mpich/.
3. MPI Forum. http://www.mpi-forum.org.
4. The performance of spin lock alternatives for shared-memory multiprocessors
5. Thread scheduling for multiprogrammed multiprocessors
Cited by
9 articles.
1. Runtime Techniques for Automatic Process Virtualization;Workshop Proceedings of the 51st International Conference on Parallel Processing;2022-08-29
2. Improved MPI Multi-Threaded Performance using OFI Scalable Endpoints;2019 IEEE Symposium on High-Performance Interconnects (HOTI);2019-08
3. Process-in-process;Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing;2018-06-11
4. Kernel-Assisted Communication Engine for MPI on Emerging Manycore Processors;2017 IEEE 24th International Conference on High Performance Computing (HiPC);2017-12
5. Eliminating contention bottlenecks in multithreaded MPI;Parallel Computing;2017-11