Estimating the overlap between dependent computations for automatic parallelization
Published: 2011-07
Volume: 11, Issue: 4-5, Pages: 575-591
ISSN: 1471-0684
Container-title: Theory and Practice of Logic Programming
Language: en
Authors: Paul Bone, Zoltan Somogyi, Peter Schachte
Abstract
Researchers working on the automatic parallelization of programs have long known that too much parallelism can be even worse for performance than too little, because spawning a task to be run on another CPU incurs overheads. Autoparallelizing compilers have therefore long tried to use granularity analysis to ensure that they only spawn off computations whose cost will probably exceed the spawn-off cost by a comfortable margin. However, this is not enough to yield good results, because data dependencies may also limit the usefulness of running computations in parallel. If one computation blocks almost immediately and can resume only after another has completed its work, then the cost of parallelization again exceeds the benefit. We present a set of algorithms for recognizing places in a program where it is worthwhile to execute two or more computations in parallel that pay attention to the second of these issues as well as the first. Our system uses profiling information to compute the times at which a procedure call consumes the values of its input arguments and the times at which it produces the values of its output arguments. Given two calls that may be executed in parallel, our system uses the times of production and consumption of the variables they share to determine how much their executions would overlap if they were run in parallel, and therefore whether executing them in parallel is a good idea or not. We have implemented this technique for Mercury in the form of a tool that uses profiling data to generate recommendations about what to parallelize, for the Mercury compiler to apply on the next compilation of the program. We present preliminary results that show that this technique can yield useful parallelization speedups, while requiring nothing more from the programmer than representative input data for the profiling run.
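The overlap estimate the abstract describes can be illustrated with a minimal sketch. This is not the authors' actual algorithm: the function name, parameters, and timing model (p runs on the original CPU from time 0; q is spawned with a fixed overhead and blocks at each consumption point until p has produced the shared variable) are all assumptions for illustration.

    def parallel_completion_time(cost_p, cost_q, shared, spawn_overhead=0.0):
        """Estimate the wall-clock time of running calls p and q in parallel.

        cost_p, cost_q  -- sequential execution times of p and q
        shared          -- (prod_time_in_p, cons_time_in_q) pairs, one per
                           variable that p produces and q consumes
        spawn_overhead  -- cost of spawning q onto another CPU
        """
        wall = spawn_overhead  # wall-clock time at which q has done `work` units
        work = 0.0
        for prod, cons in sorted(shared, key=lambda pair: pair[1]):
            wall += cons - work       # run q up to its next consumption point
            work = cons
            wall = max(wall, prod)    # block until p produces the variable
        wall += cost_q - work         # finish q's remaining work
        return max(wall, cost_p)      # both calls must complete

    # Hypothetical example: sequentially p takes 10.0 and q takes 8.0 (18.0 total).
    # q needs p's output (produced at p-time 6.0) after 2.0 units of its own work.
    # The estimate is 12.0, so spawning q looks worthwhile despite the dependency
    # and the 0.5 spawn cost.
    print(parallel_completion_time(10.0, 8.0, [(6.0, 2.0)], spawn_overhead=0.5))

Comparing such an estimate against the sequential cost (cost_p + cost_q) is roughly the kind of test the paper's analysis applies when deciding whether a candidate parallelization is worthwhile.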
Publisher
Cambridge University Press (CUP)
Subject
Artificial Intelligence, Computational Theory and Mathematics, Hardware and Architecture, Theoretical Computer Science, Software
Cited by: 2 articles.