Affiliation:
1. The University of Sydney, Camperdown, NSW, Australia
2. The University of Tokyo, Meguro-ku, Tokyo, Japan
Abstract
Adaptive Monte Carlo variance reduction is an effective framework that runs a Monte Carlo simulation alongside a parameter search algorithm for variance reduction, although in some instances an initialization step is required to prepare problem parameters. Despite the effectiveness of adaptive variance reduction in various fields of application, the length of this preliminary phase has often been left unspecified, for the user to determine on a case-by-case basis, much as in typical sequential frameworks. This uncertainty can be fatal in realistic finite-budget situations, since the pilot run may consume most of the budget, or even all of it. To obviate such an ad hoc initialization step, we develop a batching procedure for adaptive variance reduction and provide an implementable formula for the learning rate of the parameter search that minimizes an upper bound on the theoretical variance of the empirical batch mean. We analyze the decay rates of the minimized upper bound towards the minimal estimator variance with respect to the predetermined computing budget, and provide convergence results as the computing budget increases progressively while the batch size is held fixed. Numerical examples support the theoretical findings and illustrate the effectiveness of the proposed batching procedure.
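To illustrate the general idea of a batching procedure in adaptive Monte Carlo variance reduction, the following minimal Python sketch splits a fixed computing budget into equal batches and, within each batch, runs importance sampling while updating the sampling parameter by stochastic gradient descent on the estimator's second moment. The integrand, the Gaussian mean-shift parameterization, and the constant step size are assumptions chosen for illustration only; the paper's variance-minimizing learning-rate formula and its theoretical guarantees are not reproduced here.

# Illustrative sketch only: batched adaptive Monte Carlo with a generic
# stochastic-gradient parameter search (not the paper's learning-rate formula).
import numpy as np

rng = np.random.default_rng(0)

def estimate_batched(total_budget, num_batches, step_size=0.05):
    """Split the budget into equal batches; within each batch, sample under the
    current mean-shift theta, record the importance-weighted batch mean, and
    update theta by an unbiased gradient estimate of the second moment."""
    n_per_batch = total_budget // num_batches
    theta = 0.0                      # mean shift of the Gaussian proposal N(theta, 1)
    batch_means = []
    for _ in range(num_batches):
        x = rng.normal(loc=theta, scale=1.0, size=n_per_batch)
        # Example target: E[exp(X) 1{X > 1}] under X ~ N(0, 1), estimated by
        # importance sampling from N(theta, 1) with the likelihood ratio below.
        lr_weight = np.exp(-theta * x + 0.5 * theta**2)
        samples = np.exp(x) * (x > 1.0) * lr_weight
        batch_means.append(samples.mean())
        # Unbiased estimate of d/dtheta of the second moment of the estimator
        # (the mean is theta-independent, so this also drives the variance down).
        grad = np.mean(samples**2 * (theta - x))
        theta -= step_size * grad
    # Empirical batch mean: average of the per-batch estimators.
    return np.mean(batch_means), theta

if __name__ == "__main__":
    est, final_theta = estimate_batched(total_budget=100_000, num_batches=20)
    print(f"batched estimate: {est:.6f}, final shift parameter: {final_theta:.3f}")

In this sketch the batch size is fixed in advance, so no separate pilot run is needed: the parameter search and the estimation share the same predetermined budget, which is the situation the batching procedure is designed for.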
Funder
JSPS Grants-in-Aid for Scientific Research
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications, Modeling and Simulation
Cited by
3 articles.