Affiliation:
1. Toyota Technological Institute at Chicago
2. Carnegie Mellon University
Abstract
Recent work on adaptive functional programming (AFP) developed techniques for writing programs that can respond to modifications to their data by performing change propagation. To achieve this, executions of programs are represented with dynamic dependence graphs (DDGs) that record data dependences and control dependences in a way that allows a change-propagation algorithm to update the computation as if the program were run from scratch, while re-executing only the parts of the computation affected by the changes. Since change propagation only re-executes parts of the computation, it can respond to certain incremental modifications asymptotically faster than recomputing from scratch, potentially offering significant speedups. Such asymptotic speedups, however, are rare: for many computations and modifications, change propagation is no faster than recomputing from scratch.
In this article, we realize a duality between dynamic dependence graphs and memoization, and combine them to give a change-propagation algorithm that can dramatically increase computation reuse. The key idea is to use DDGs to identify and re-execute the parts of the computation that are affected by modifications, while using memoization to identify the parts of the computation that remain unaffected by the changes. We refer to this approach as self-adjusting computation. Since DDGs are imperative, but (traditional) memoization requires purely functional computation, reusing computation correctly via memoization becomes a challenge. We overcome this challenge with a technique for remembering and reusing not just the results of function calls (as in conventional memoization), but their executions represented with DDGs. We show that the proposed approach is realistic by describing a library for self-adjusting computation, presenting efficient algorithms for realizing the library, and describing and evaluating an implementation. Our experimental evaluation with a variety of applications, ranging from simple list primitives to more sophisticated computational geometry algorithms, shows that the approach is effective in practice: compared to recomputing from scratch, self-adjusting programs respond to small modifications to their data orders of magnitude faster.
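The combination described above can be illustrated with a deliberately simplified sketch. The code below is a hypothetical toy, not the paper's library (which is implemented in Standard ML with DDGs and a memoized change-propagation algorithm): modifiable references record which computations read them, a write propagates changes only to those readers, and an equality cutoff on writes plays the role of reuse by stopping propagation when a value is unchanged.

```python
class Mod:
    """A modifiable reference: a mutable cell that tracks its readers."""
    def __init__(self, value):
        self.value = value
        self.readers = []          # closures to re-run when the value changes

    def read(self, reader):
        """Run `reader` on the current value and record the dependence."""
        self.readers.append(reader)
        reader(self.value)

    def write(self, value):
        """Change the value and propagate to dependent readers only."""
        if value == self.value:    # cutoff: an unchanged value stops
            return                 # propagation, enabling reuse downstream
        self.value = value
        for reader in list(self.readers):
            reader(value)

def adaptive_sum(a, b):
    """A sum of two modifiables, maintained under changes to either input."""
    out = Mod(None)
    def recompute(_):
        out.write(a.value + b.value)
    a.read(recompute)
    b.read(recompute)
    return out

a, b = Mod(1), Mod(2)
s = adaptive_sum(a, b)
print(s.value)   # 3
a.write(10)      # change propagation re-runs only the affected reader
print(s.value)   # 12
```

This captures only the dependence-tracking half of the story; the paper's contribution is pairing such DDG-style tracking with memoization of entire executions, so that subcomputations whose inputs are unaffected are reused rather than re-run.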
Publisher
Association for Computing Machinery (ACM)
References (80 articles)
1. Analysis and caching of dependencies
2. Acar, U. A. 2005. Self-adjusting computation. Ph.D. dissertation, Department of Computer Science, Carnegie Mellon University.
3. Self-adjusting computation
4. Imperative self-adjusting computation
5. Robust Kinetic Convex Hulls in 3D
Cited by (46 articles)
1. Decomposition-based Synthesis for Applying Divide-and-Conquer-like Algorithmic Paradigms;ACM Transactions on Programming Languages and Systems;2024-06-17
2. Efficient algorithms for dynamic bidirected Dyck-reachability;Proceedings of the ACM on Programming Languages;2022-01-12
3. Efficient Parallel Self-Adjusting Computation;Proceedings of the 33rd ACM Symposium on Parallelism in Algorithms and Architectures;2021-07-06
4. Efficient counter-factual type error debugging;Science of Computer Programming;2020-12
5. Maintaining Triangle Queries under Updates;ACM Transactions on Database Systems;2020-09-25