Trade-Offs Between Synchronization, Communication, and Computation in Parallel Linear Algebra Computations
Published: 2016-06-28
Issue: 1
Volume: 3
Pages: 1-47
ISSN: 2329-4949
Container title: ACM Transactions on Parallel Computing
Short container title: ACM Trans. Parallel Comput.
Language: en
Authors:
Edgar Solomonik (1), Erin Carson (2), Nicholas Knight (2), James Demmel (2)
Affiliations:
1. University of California, Berkeley
2. University of California, Berkeley, Berkeley, CA
Abstract
This article derives trade-offs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. These trade-offs are lower bounds on the execution time of the algorithm that are independent of the number of processors but dependent on the problem size. Therefore, they provide lower bounds on the execution time of any parallel schedule of an algorithm computed by a system composed of any number of homogeneous processors, each with associated computational, communication, and synchronization costs. We employ a theoretical model that measures the amount of work and data movement as a maximum over that incurred along any execution path during the parallel computation. By considering this metric rather than the total communication volume over the whole machine, we obtain new insights into the characteristics of parallel schedules for algorithms with nontrivial dependency structures. We also present reductions from BSP and LogGP algorithms to our execution model, extending our lower bounds to these two models of parallel computation. We first develop our results for general dependency graphs and hypergraphs based on their expansion properties, and then we apply these results to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Cholesky factorization, and stencil computations. We represent some of these algorithms as families of dependency graphs and derive their communication lower bounds by studying the communication requirements of the hypergraph structures shared by these dependency graphs. In addition to these lower bounds, we introduce a new communication-efficient parallelization for stencil computations, motivated by our lower bound analysis and by the properties of existing parallelizations of these algorithms.
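For readers skimming this record, the following is a minimal sketch of the general form these trade-off lower bounds take, using the common notation in which F_P, W_P, and S_P denote the computation, communication (bandwidth), and synchronization costs incurred along some execution path of a parallel schedule. The lattice formulation and the two instances are recalled from the authors' earlier conference version of this work and are given here only as an illustration; the precise hypotheses, constants, and statements are those in the article itself.

% Hedged sketch of the shape of the trade-off lower bounds described in the abstract.
% F_P, W_P, S_P: computation, communication, and synchronization cost incurred along
% some execution path of a parallel schedule. The lattice form and the two instances
% follow the authors' earlier conference announcement; consult the article for the
% exact hypotheses and statements.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
If the dependency structure of an algorithm contains a $d$-dimensional lattice
(hyper)graph of breadth $n$, then every parallelization satisfies
\[
  F_P \cdot S_P^{\,d-1} = \Omega\!\left(n^{d}\right),
  \qquad
  W_P \cdot S_P^{\,d-2} = \Omega\!\left(n^{d-1}\right).
\]
For example, $n \times n$ triangular substitution ($d = 2$) gives
$F_P \cdot S_P = \Omega(n^{2})$, while Cholesky factorization of an
$n \times n$ matrix ($d = 3$) gives
$F_P \cdot S_P^{2} = \Omega(n^{3})$ and $W_P \cdot S_P = \Omega(n^{2})$.
\end{document}

Under this sketch, reducing synchronization for a fixed problem size forces the per-path computation or communication cost to grow: for instance, a Cholesky schedule with S_P = O(log n) synchronizations would have F_P = Omega(n^3 / log^2 n), ruling out F_P = O(n^3 / P) once P grows faster than log^2 n.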
Funder
X-Stack program
DOE computational science graduate fellowship
U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program
DARPA
ETH Zurich postdoctoral fellowship
Publisher
Association for Computing Machinery (ACM)
Subject
Computational Theory and Mathematics, Computer Science Applications, Hardware and Architecture, Modeling and Simulation, Software
Cited by
12 articles.
1. Brief Announcement: Red-Blue Pebbling with Multiple Processors: Time, Communication and Memory Trade-offs. Proceedings of the 36th ACM Symposium on Parallelism in Algorithms and Architectures, 2024-06-17.
2. CA3DMM: A New Algorithm Based on a Unified View of Parallel Matrix Multiplication. SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, 2022-11.
3. I/O-Optimal Algorithms for Symmetric Linear Algebra Kernels. Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures, 2022-07-11.
4. Timing Analysis in Multi-Core Real Time Systems. 2021 IEEE International Symposium on Smart Electronic Systems (iSES), 2021-12.
5. On the parallel I/O optimality of linear algebra kernels. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2021-11-13.