Affiliation:
1. Academia Sinica, Taipei, Taiwan, Republic of China
2. New York University, New York, NY
Abstract
To exploit parallelism on shared-memory parallel computers (SMPCs), it is natural to focus on decomposing the computation, mainly by distributing the iterations of nested Do-loops. In contrast, on distributed-memory parallel computers (DMPCs), both the decomposition of computation and the distribution of data must be handled in order to balance the computation load and to minimize the migration of data. We propose and validate experimentally a method for handling computations and data synergistically to minimize the overall execution time on DMPCs. The method is based on a number of novel techniques, also presented in this article. The core idea is to rank the "importance" of the data arrays in a program and to identify the dominant ones; the intuition is that the dominant arrays are those whose migration would be the most expensive. The correspondence between iteration-space mapping vectors and the distributed dimensions of the dominant data array in each nested Do-loop allows us to design algorithms that determine data and computation decompositions at the same time. Given the data distribution, the computation decomposition for each nested Do-loop follows either the "owner computes" rule or the "owner stores" rule with respect to the dominant data array. If all temporal dependence relations across iteration partitions are regular, we use tiling to allow pipelining and the overlapping of computation and communication. However, to use tiling on DMPCs we needed to extend the existing techniques for determining tiling vectors and tile sizes, as they were originally suited only for SMPCs. The overall method is illustrated on programs for the 2D heat equation, Gaussian elimination with pivoting, and the 2D fast Fourier transform, on a linear processor array and on a 2D processor grid.
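To make the "owner computes" idea concrete, the following is a minimal C sketch (not the paper's algorithm): it assumes a block-row distribution of a dominant N×N array over P processes and derives, for each process, the loop bounds it would execute in a 2D heat-equation (Jacobi) sweep. The values of N and P and the helper names block_lo/block_hi are illustrative assumptions.

```c
/* Minimal sketch, assuming a block-row distribution of the dominant array
 * A(N,N) over P processes; the "owner computes" rule then assigns to each
 * process exactly those (i,j) iterations whose writes it owns.
 * N, P, block_lo and block_hi are illustrative assumptions, not the paper's code. */
#include <stdio.h>

#define N 16   /* problem size (assumed)        */
#define P 4    /* number of processes (assumed) */

/* First and last row of A owned by process p under a block distribution. */
static int block_lo(int p) { return p * N / P; }
static int block_hi(int p) { return (p + 1) * N / P - 1; }

int main(void) {
    for (int p = 0; p < P; ++p) {
        int lo = block_lo(p), hi = block_hi(p);
        /* Owner computes: clamp to the interior, since boundary rows 0 and
         * N-1 of the heat-equation grid are never updated. */
        int ilo = lo < 1 ? 1 : lo;
        int ihi = hi > N - 2 ? N - 2 : hi;
        printf("process %d owns rows %2d..%2d, runs i = %2d..%2d, j = 1..%d\n",
               p, lo, hi, ilo, ihi, N - 2);
        /* The Jacobi update restricted to owned rows would be:
         *   for (i = ilo; i <= ihi; ++i)
         *     for (j = 1; j <= N-2; ++j)
         *       Anew[i][j] = 0.25*(A[i-1][j] + A[i+1][j] + A[i][j-1] + A[i][j+1]);
         * Rows lo-1 and hi+1 reside on neighbouring processes and would have to
         * be communicated; tiling that exchange against the sweep is what lets
         * computation and communication overlap on a DMPC. */
    }
    return 0;
}
```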
Publisher
Association for Computing Machinery (ACM)
Cited by 17 articles.
1. Code generation for accurate array redistribution on automatic distributed-memory parallelization;International Journal of Networked and Distributed Computing;2014
2. Data Decomposition for Code Parallelization in Practice: What Do the Experts Need?;2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing;2013-11
3. Code Generation for Accurate Array Redistribution on Automatic Distributed-Memory Parallelization;2013 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing;2013-07
4. An Automatic Computation and Data Decomposition Algorithm of Prioritized Dominant Array;2012 13th International Conference on Parallel and Distributed Computing, Applications and Technologies;2012-12
5. An Improvement to Affine Decomposition on Distributed Memory Architecture;2012 11th International Symposium on Distributed Computing and Applications to Business, Engineering & Science;2012-10