Author:
Peter A. Whalley, Daniel Paulin, Benedict Leimkuhler
Abstract
Hamiltonian Monte Carlo (HMC) algorithms, which combine numerical approximation of Hamiltonian dynamics on finite intervals with stochastic refreshment and Metropolis correction, are popular sampling schemes, but it is known that they may suffer from slow convergence in the continuous time limit. A recent paper of Bou-Rabee and Sanz-Serna (Ann Appl Prob, 27:2159-2194, 2017) demonstrated that this issue can be addressed by simply randomizing the duration parameter of the Hamiltonian paths. In this article, we use the same idea to enhance the sampling efficiency of a constrained version of HMC, with potential benefits in a variety of application settings. We demonstrate both the conservation of the stationary distribution and the ergodicity of the method. We also compare the performance of various schemes in numerical studies of model problems, including an application to high-dimensional covariance estimation.
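The abstract's central idea, randomizing the duration parameter of the Hamiltonian paths, can be illustrated with a minimal, unconstrained HMC sketch: each proposal integrates the dynamics with leapfrog for a duration drawn uniformly from (0, t_max], then applies a Metropolis correction. This is only a schematic of the randomization idea; the paper's actual method concerns constrained HMC (with RATTLE-type projections), which this sketch omits. The names `rhmc`, `leapfrog`, and the parameter `t_max` are illustrative, not from the paper.

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for potential U."""
    p = p - 0.5 * step * grad_U(q)          # initial half-step in momentum
    for _ in range(n_steps - 1):
        q = q + step * p                     # full position step
        p = p - step * grad_U(q)             # full momentum step
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)           # final half-step in momentum
    return q, p

def rhmc(q0, U, grad_U, n_samples, step=0.1, t_max=2.0, rng=None):
    """HMC with integration duration randomized uniformly on (0, t_max]."""
    rng = np.random.default_rng(rng)
    q = np.asarray(q0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)                      # momentum refreshment
        duration = rng.uniform(0.0, t_max)                    # randomized duration
        n_steps = max(1, int(duration / step))
        q_new, p_new = leapfrog(q, p, grad_U, step, n_steps)
        # Metropolis correction on the total energy H = U + |p|^2/2
        dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            q = q_new
        samples.append(q.copy())
    return np.array(samples)
```

For example, sampling a standard Gaussian uses `U = lambda q: 0.5 * float(q @ q)` and `grad_U = lambda q: q`. With a fixed duration, HMC on such targets can be nearly periodic and mix slowly; drawing the duration at random breaks this resonance, which is the effect the randomization exploits.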
Funder
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC
Subject
Computational Theory and Mathematics, Statistics, Probability and Uncertainty, Statistics and Probability, Theoretical Computer Science
References (70 articles)
1. Andersen, H.C.: Molecular dynamics simulations at constant pressure and/or temperature. J. Chem. Phys. 72(4), 2384–2393 (1980)
2. Andersen, H.C.: Rattle: a “velocity” version of the SHAKE algorithm for molecular dynamics calculations. J. Comput. Phys. 52(1), 24–34 (1983)
3. Barber, D.: Bayesian Reasoning and Machine Learning. Cambridge University Press, Cambridge (2012)
4. Bierkens, J., Fearnhead, P., Roberts, G.: The zig-zag process and super-efficient sampling for Bayesian analysis of big data. Ann. Stat. 47(3), 1288–1320 (2019)
5. Böttcher, B., Schilling, R., Wang, J.: Lévy Matters III. Lecture Notes in Mathematics. Springer, Cham (2013)