Abstract
Monte Carlo algorithms simulate some prescribed number of samples, taking some random real time to complete the necessary computations. This work considers the converse: to impose a real-time budget on the computation, so that the number of samples simulated is random. To complicate matters, the real time taken for each simulation may depend on the sample produced, so that the samples themselves are not independent of their number, and a length bias with respect to compute time is apparent. This is especially problematic when a Markov chain Monte Carlo (MCMC) algorithm is used and the final state of the Markov chain, rather than an average over all states, is required, which is the case in parallel tempering implementations of MCMC. The length bias does not diminish with the compute budget in this case. It also occurs in sequential Monte Carlo (SMC) algorithms, which are the focus of this paper. We propose an anytime framework to address the concern, using a continuous-time Markov jump process to study the progress of the computation in real time. We first show that for any MCMC algorithm, the length bias of the final state's distribution due to the imposed real-time computing budget can be eliminated by using a multiple-chain construction. The utility of this construction is then demonstrated on a large-scale SMC$^2$ implementation, using four billion particles distributed across a cluster of 128 graphics processing units on the Amazon EC2 service. The anytime framework imposes a real-time budget on the MCMC move steps within the SMC$^2$ algorithm, ensuring that all processors are simultaneously ready for the resampling step, demonstrably reducing idleness due to waiting times and providing substantial control over the total compute budget.
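The multiple-chain construction described above can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: several chains are advanced round-robin under a wall-clock budget, and the chain whose update straddles the deadline (the one subject to length bias, since states with longer compute times are more likely to be in progress at the deadline) is discarded. The kernel `slow_walk` and all names here are illustrative assumptions, with a state-dependent compute time to create the situation in which length bias arises.

```python
import random
import time

def anytime_chains(step, states, budget):
    """Advance several MCMC chains round-robin under a real-time budget.

    `step` is any MCMC transition kernel. When the budget expires, the
    chain whose update was in progress is length-biased toward states
    with long compute times, so it is discarded; the remaining chains
    are returned (a sketch of the multiple-chain construction).
    """
    deadline = time.monotonic() + budget
    i = 0
    while True:
        new = step(states[i])
        if time.monotonic() >= deadline:
            # The i-th chain was mid-update when the budget expired:
            # drop it and return the rest.
            return states[:i] + states[i + 1:]
        states[i] = new
        i = (i + 1) % len(states)

# Toy kernel (illustrative): a random walk whose compute time grows
# with the magnitude of the state, i.e. time depends on the sample.
def slow_walk(x):
    time.sleep(0.001 * (1 + abs(x)))
    return x + random.gauss(0.0, 1.0)

survivors = anytime_chains(slow_walk, [0.0] * 4, budget=0.05)
print(len(survivors))  # one chain discarded, three remain
```

In the SMC$^2$ setting sketched in the abstract, this is the mechanism that lets every processor stop its move steps at the same real-time deadline, so all are ready for resampling together.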
Publisher: Cambridge University Press (CUP)
Subject: General Earth and Planetary Sciences, General Environmental Science
Cited by: 1 article.