Affiliation: Eindhoven University of Technology, The Netherlands
Abstract
Bayesian inference is a probabilistic approach to the problem of drawing conclusions from observed data. Its main challenge is computational, which the Bayesian community tends to address through approximation techniques. However, these techniques come with their own set of challenges, including approximation errors, the difficulty of assessing these errors, and the inherited NP-hardness of the inference problem. Concurrently, in an effort to keep up with Moore's law, the computer engineering community has developed a growing number of programming techniques for today's heterogeneous hardware. These techniques aim to optimize the "execution schedule", i.e., the order of computations and their mapping onto the available execution units. In this work, we advocate the use of these techniques to avoid a common pitfall known as "premature approximation". Notably, these techniques have the potential to significantly enhance performance, thereby reducing the need for approximation and mitigating the challenges that accompany it. We first demonstrate how optimizing the storage strategy, i.e., when and where intermediate results are stored, allows for a trade-off between runtime and peak memory usage. We then investigate various techniques that aim to automatically generate efficient execution schedules. Finally, we focus on a specific, runtime-efficient execution schedule identified through design space exploration and compare its performance with that of two established solvers for probabilistic inference. The results show that our optimized implementation achieves speedups ranging from \(2\times\) to \(316\times\) on the UAI 2014 Promedus benchmark problems relative to the reference solvers. The ideas and methods presented in this study are examined within the framework of exact inference for discrete random variables, but they apply equally to scenarios involving continuous variables.
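To make the storage-strategy trade-off concrete, here is a minimal sketch in Python (not the paper's implementation; the factor shapes and the `cache` parameter are illustrative assumptions) of exact marginalization over discrete table factors. Caching the intermediate joint avoids recomputing it for every query, at the cost of keeping it alive in memory:

```python
# Minimal sketch of the runtime / peak-memory trade-off in exact discrete
# inference. Caching the intermediate joint factor speeds up repeated
# queries but raises peak memory; recomputing it does the opposite.
import numpy as np

def product_marginal(factors, keep_axis, cache=None):
    """Multiply table factors defined over the same variables and sum out
    all axes except keep_axis. If `cache` is a dict, the intermediate
    joint is stored and reused across calls; if None, it is recomputed."""
    key = "joint"
    if cache is not None and key in cache:
        joint = cache[key]                 # reuse stored intermediate
    else:
        joint = factors[0]
        for f in factors[1:]:
            joint = joint * f              # elementwise factor product
        if cache is not None:
            cache[key] = joint             # store intermediate result
    axes = tuple(i for i in range(joint.ndim) if i != keep_axis)
    return joint.sum(axis=axes)

rng = np.random.default_rng(0)
# Three binary variables; deliberately naive full-joint factors so the
# intermediate is the dominant memory cost.
f1 = rng.random((2, 2, 2))
f2 = rng.random((2, 2, 2))

cache = {}
m0 = product_marginal([f1, f2], 0, cache)   # builds and stores the joint
m1 = product_marginal([f1, f2], 1, cache)   # reuses it: fast, more memory
m2 = product_marginal([f1, f2], 2, None)    # recomputes: slow, less memory
print(m0, m1, m2)
```

Choosing, per intermediate factor, whether to store or recompute is exactly the kind of decision an execution schedule fixes; freeing the cache after the last query that needs it recovers the low-memory schedule.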
Publisher: Association for Computing Machinery (ACM)