Affiliation:
1. Yale Univ., New Haven, CT
Abstract
We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers: an 18-node shared-memory multiprocessor and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented.
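Result parallelism and its companion paradigm, live data structures, can be illustrated with a small sketch (in Python rather than the paper's C-Linda; all names here are illustrative assumptions): each slot of the result is bound to an independent computation, and the structure of not-yet-resolved slots plays the role of the live data structure.

```python
# A minimal sketch of result parallelism via a "live data structure":
# each element of the result is bound to a computation that runs in
# parallel and then resolves into its final value. Futures stand in
# for live data elements; names are illustrative, not from the paper.
from concurrent.futures import ThreadPoolExecutor

def element(i):
    # Compute one element of the result, independently of the others.
    return i * i

def build_result(n):
    with ThreadPoolExecutor() as pool:
        # The list of futures is the "live" structure: its slots are
        # computations in progress rather than values.
        live = [pool.submit(element, i) for i in range(n)]
        # Each slot resolves into its final value, yielding the result
        # data structure.
        return [f.result() for f in live]

if __name__ == "__main__":
    print(build_result(5))  # each square computed as its own task
```

The essential point is that the programmer describes the shape of the result and the computation of each element; the slots resolve in parallel without explicit task scheduling.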
We focus on the paradigms, not on machine architecture or programming languages: The programming methods we discuss are useful on many kinds of parallel machine, and each can be expressed in several different parallel programming languages. Our programming discussion and the examples use the parallel language C-Linda for several reasons: The main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.
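Agenda parallelism over a distributed data structure is commonly realized as a "bag of tasks": workers repeatedly withdraw a task from a shared bag and deposit results, much as Linda processes use in() and out() on a tuple space. The following is a hedged, single-process sketch of that pattern using Python threads and queues; the function and variable names are illustrative assumptions, not code from the paper.

```python
# A minimal sketch of agenda parallelism: worker processes repeatedly
# withdraw tasks from a shared "bag of tasks" (a distributed data
# structure playing the role of a Linda tuple space). Illustrative
# only; this uses threads in one process, not the paper's C-Linda.
import queue
import threading

def worker(tasks, results):
    while True:
        n = tasks.get()          # withdraw a task, like Linda's in()
        if n is None:            # sentinel: the agenda is exhausted
            break
        results.put((n, n * n))  # deposit a result, like Linda's out()

def run(values, nworkers=4):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(nworkers)]
    for t in threads:
        t.start()
    for v in values:             # fill the bag with the agenda of tasks
        tasks.put(v)
    for _ in threads:            # one sentinel per worker
        tasks.put(None)
    for t in threads:
        t.join()
    found = dict(results.get() for _ in values)
    return [found[v] for v in values]
```

Because workers grab whatever task is next, this scheme balances load automatically: a fast worker simply withdraws more tasks than a slow one, which is the chief attraction of agenda parallelism.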
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
Cited by
121 articles.
1. Analysis of Parallel Computing Methods and Algorithms;2023 IEEE XVI International Scientific and Technical Conference Actual Problems of Electronic Instrument Engineering (APEIE);2023-11-10
2. An Autonomous Data Language;Theoretical Aspects of Computing – ICTAC 2023;2023
3. User-Defined Tensor Data Analysis;SpringerBriefs in Computer Science;2021
4. Introduction;User-Defined Tensor Data Analysis;2021
5. Developing political-ecological theory: The need for many-task computing;PLOS ONE;2020-11-24