Affiliation:
1. CACR, California Institute of Technology, Pasadena, California 91125
2. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109, and Institute of Scientific Computing, University of Vienna, Austria
Abstract
A key characteristic of today's high performance computing systems is a physically distributed memory, which makes the efficient management of locality essential for taking advantage of the performance enhancements offered by these architectures. Currently, the standard technique for programming such systems involves the extension of traditional sequential programming languages with explicit message-passing libraries, in a processor-centric model for programming and execution. It is commonly understood that this programming paradigm results in complex, brittle, and error-prone programs, because of the way in which algorithms and communication are inextricably interwoven. This paper describes a new approach to locality awareness, which focuses on data distributions in high-productivity languages. Data distributions provide an abstract specification of the partitioning of large-scale data collections across memory units, supporting coarse-grain parallel computation and locality of access at a high level of abstraction. Our design, which is based on a new programming language called Chapel, is motivated by the need to provide a high-productivity paradigm for the development of efficient and reusable parallel code. We present an object-oriented framework that allows the explicit specification of the mapping of elements in a collection to memory units, the control of the arrangement of elements within such units, the definition of sequential and parallel iteration over collections, and the formulation of specialized allocation policies as required for advanced applications. The result is a concise high-productivity programming model that separates algorithms from data representation and enables reuse of distributions, allocation policies, and data structures.
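The explicit mapping of collection elements to memory units and the data-parallel iteration described in the abstract correspond to what Chapel exposes through distributed domains and arrays. The following is a minimal sketch in the syntax of later public Chapel releases; the module name BlockDist, the Block distribution class, and the dmapped clause are taken from those releases and are assumptions here, not the specific framework presented in the paper.

use BlockDist;

config const n = 8;

// A 2D index set, and a domain whose indices are block-distributed
// across the available locales (memory units).
const Space = {1..n, 1..n};
const D: domain(2) dmapped Block(boundingBox=Space) = Space;

// An array declared over the distributed domain is itself distributed:
// each locale allocates and owns the block of elements mapped to it.
var A: [D] real;

// Data-parallel iteration over the distributed domain; each locale
// computes on the elements it owns, so locality of access follows
// directly from the distribution.
forall (i, j) in D do
  A[i, j] = i + j / 10.0;

writeln(A);

Note how the forall loop is written against the domain rather than against a particular data layout; swapping Block for another distribution would change where elements are placed without changing the algorithm, which illustrates the separation of algorithms from data representation that the abstract emphasizes.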
Subject
Hardware and Architecture, Theoretical Computer Science, Software
Cited by 13 articles.
1. CommAnalyzer; Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing; 2018-06-11
2. Comparative Performance and Optimization of Chapel in Modern Manycore Architectures; 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW); 2017-05
3. SIMD parallel MCMC sampling with applications for big-data Bayesian analytics; Computational Statistics & Data Analysis; 2015-08
4. A Theory of Data Movement in Parallel Computations; Procedia Computer Science (International Conference on Computational Science, ICCS 2012); 2012
5. The rise and fall of high performance Fortran; Communications of the ACM; 2011-11