Abstract
Adaptive importance samplers are adaptive Monte Carlo algorithms for estimating expectations with respect to a target distribution; they adapt themselves to obtain better estimators over a sequence of iterations. Although it is straightforward to show that they have the same $$\mathcal{O}(1/\sqrt{N})$$ convergence rate as standard importance samplers, where $$N$$ is the number of Monte Carlo samples, the behaviour of adaptive importance samplers over the number of iterations has been left relatively unexplored. In this work, we investigate an adaptation strategy based on convex optimisation, which leads to a class of adaptive importance samplers termed optimised adaptive importance samplers (OAIS). These samplers rely on the iterative minimisation of the $$\chi^2$$-divergence between an exponential-family proposal and the target. The analysed algorithms are closely related to the class of adaptive importance samplers that minimise the variance of the weight function. We first prove non-asymptotic error bounds for the mean squared errors (MSEs) of these algorithms, which depend explicitly on both the number of iterations and the number of samples. The non-asymptotic bounds imply that, when the target belongs to the exponential family, the $$L_2$$ errors of the optimised samplers converge to the optimal rate $$\mathcal{O}(1/\sqrt{N})$$, and the rate of convergence in the number of iterations is provided explicitly. When the target does not belong to the exponential family, the rate of convergence is the same, but the asymptotic $$L_2$$ error increases by a factor $$\sqrt{\rho^\star} > 1$$, where $$\rho^\star - 1$$ is the minimum $$\chi^2$$-divergence between the target and an exponential-family proposal.
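The adaptation scheme summarised above can be made concrete. Writing $$q_\theta$$ for the proposal, $$\pi$$ for the target and $$w_\theta = \pi / q_\theta$$ for the weight function, the quantity being minimised is $$\rho(\theta) = \mathbb{E}_{q_\theta}[w_\theta(x)^2] = 1 + \chi^2(\pi \Vert q_\theta)$$, so $$\rho^\star - 1$$ is exactly the minimum $$\chi^2$$-divergence mentioned above. The following Python sketch is purely illustrative and not the paper's exact algorithm: it assumes a one-dimensional Gaussian target, adapts the mean of a unit-variance Gaussian proposal by stochastic gradient descent on $$\rho$$ via the identity $$\nabla_\theta \rho(\theta) = -\mathbb{E}_{q_\theta}[w_\theta(x)^2 \nabla_\theta \log q_\theta(x)]$$, and clips the gradient, a numerical-stability tweak for the demo that is not part of the analysed scheme.

```python
import numpy as np

# Illustrative OAIS-style loop (a sketch, not the paper's exact algorithm):
# adapt the mean of a unit-variance Gaussian proposal by stochastic gradient
# descent on rho(theta) = E_{q_theta}[w_theta(x)^2], then run a standard
# self-normalised importance-sampling estimate with the adapted proposal.

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalised log-target: here a unit-variance Gaussian with mean 3.
    return -0.5 * (x - 3.0) ** 2

def log_proposal(x, mu):
    # Unit-variance Gaussian proposal q_theta, with theta = mu (mean only).
    return -0.5 * (x - mu) ** 2

mu, step, n_iters, n_samples = 0.0, 0.05, 200, 100
for _ in range(n_iters):
    x = rng.normal(mu, 1.0, size=n_samples)          # x_i ~ q_theta
    w = np.exp(log_target(x) - log_proposal(x, mu))  # w_i = pi(x_i)/q(x_i)
    # grad rho(theta) = -E_q[w^2 * grad log q]; for a unit-variance Gaussian,
    # grad_mu log q(x) = x - mu.  Clipping is only for demo stability.
    grad = np.clip(-np.mean(w ** 2 * (x - mu)), -5.0, 5.0)
    mu -= step * grad                                # SGD step on rho

# Self-normalised importance-sampling estimate of E_pi[x] under the adapted
# proposal; both printed values should be close to the target mean 3.
x = rng.normal(mu, 1.0, size=10_000)
w = np.exp(log_target(x) - log_proposal(x, mu))
print(mu, np.sum(w * x) / np.sum(w))
```

In this toy setting the target itself lies in the (Gaussian) exponential family, so $$\rho^\star = 1$$ and the adapted proposal can match the target exactly; with a misspecified proposal family, the same loop would instead settle into the $$\sqrt{\rho^\star} > 1$$ regime described in the abstract.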
Funder
Engineering and Physical Sciences Research Council
Agencia Estatal de Investigación
Office of Naval Research Global
Publisher
Springer Science and Business Media LLC
Subject
Computational Theory and Mathematics; Statistics, Probability and Uncertainty; Statistics and Probability; Theoretical Computer Science
Cited by
13 articles.