A Global Stochastic Optimization Particle Filter Algorithm
Affiliation:
1. School of Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UG, U.K.
2. Département CITI, Telecom SudParis, 9 rue Charles Fourier, 91008 Evry, France
Abstract
We introduce a new online algorithm for expected log-likelihood maximization in situations where the objective function is multi-modal and/or has saddle points. The key element underpinning the algorithm is a probability distribution which (a) is shown to concentrate on the target parameter value as the sample size increases and (b) can be estimated efficiently by means of a standard particle filter algorithm. This distribution depends on a learning rate: the faster the learning rate, the more quickly the distribution concentrates on the desired element of the search space, but the less likely the algorithm is to escape from a local optimum of the objective function. To achieve a fast convergence rate with a slow learning rate, our algorithm exploits the acceleration property of averaging, well known in the stochastic gradient literature. In numerical experiments on several challenging estimation problems, the algorithm finds, with high probability, the highest mode of the objective function and converges to its global maximizer at the optimal rate. Although the focus of this work is expected log-likelihood maximization, the proposed methodology and its theory apply more generally to optimizing any function defined through an expectation.
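The mechanism described in the abstract can be illustrated with a deliberately simplified toy sketch: a cloud of particles is repeatedly reweighted by a tempered version of a multimodal objective (the exponent scaled by a learning rate `h`), resampled, jittered with a decaying random walk, and the running point estimate is stabilized by Polyak-style averaging. This is not the paper's algorithm or its theoretical setting; the objective, the jitter schedule, and the burn-in for averaging are all illustrative choices made here.

```python
import numpy as np

# Toy multimodal objective; its global maximizer lies near theta ≈ 2.62
# (the mode of sin(3θ) closest to the centre of the quadratic penalty).
def objective(theta):
    return np.sin(3.0 * theta) - 0.1 * (theta - 2.0) ** 2

rng = np.random.default_rng(0)
n = 1000          # number of particles
h = 0.1           # learning rate: a small tempering increment per step
burn_in = 100     # iterations discarded before averaging starts

theta = rng.uniform(-4.0, 6.0, n)   # diffuse initial particle cloud
avg, n_avg = 0.0, 0
for t in range(1, 301):
    # Reweight particles toward high objective values; a small h keeps
    # enough diversity to explore several modes before concentrating.
    w = np.exp(h * objective(theta))
    w /= w.sum()
    # Multinomial resampling followed by a decaying random-walk jitter.
    theta = theta[rng.choice(n, n, p=w)]
    theta += 0.3 / np.sqrt(t) * rng.standard_normal(n)
    # Polyak-style averaging of the running point estimate.
    if t > burn_in:
        n_avg += 1
        avg += (theta.mean() - avg) / n_avg

print(round(avg, 2))   # averaged estimate, close to the global maximizer
```

The slow learning rate makes each individual reweighting step gentle, which is what allows the cloud to escape the shallower modes; the averaging step then recovers a fast-converging point estimate from the slowly concentrating distribution, mirroring the trade-off stated in the abstract.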
Publisher
Oxford University Press (OUP)
Subject
Applied Mathematics, Statistics, Probability and Uncertainty, General Agricultural and Biological Sciences, Agricultural and Biological Sciences (miscellaneous), General Mathematics, Statistics and Probability
Cited by
1 article.