Affiliation:
1. School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907
Abstract
Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sample average approximation, fail to learn exploitable patterns from the collected data and require many objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations optimally. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) that quantifies the merit of making new objective evaluations. In this work, we reformulate the expected improvement (EI) IAF to filter out parametric and measurement uncertainties. We bypass the curse of dimensionality because the method does not require learning the response surface as a function of the stochastic parameters. To increase the method's robustness, we employ a fully Bayesian treatment of Gaussian processes (GPs), constructing a particle approximation of the posterior of their hyperparameters using adaptive Markov chain Monte Carlo (MCMC). In addition, our approach quantifies the epistemic uncertainty in the location of the optimum and the optimal value induced by the limited number of objective evaluations used to obtain it. We verify and validate our approach by solving two synthetic optimization problems under uncertainty and demonstrate it by solving the oil-well placement problem (OWPP) with uncertainties in the permeability field and the oil price time series.
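For context, the core of any BGO loop is the alternation between fitting a GP surrogate and maximizing EI over candidate designs. The following is a minimal sketch of that standard loop (deterministic EI for minimization, a fixed RBF kernel, and a hypothetical 1D objective are illustrative assumptions); it omits the paper's uncertainty filtering and the fully Bayesian MCMC treatment of the hyperparameters:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical cheap 1D objective; stands in for an expensive simulator.
def objective(x):
    return np.sin(3.0 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5, 1))   # small initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)

def expected_improvement(x_cand, gp, y_best):
    """EI(x) = E[max(y_best - Y(x), 0)] under the GP posterior (minimization)."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)       # guard against zero predictive variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Sequential acquisition: refit the surrogate, then evaluate where EI is maximal.
for _ in range(20):
    gp.fit(X, y)
    x_grid = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
    ei = expected_improvement(x_grid, gp, y.min())
    x_next = x_grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)], "best value:", y.min())
```

The paper's contribution replaces this deterministic EI with a version that filters out parametric and measurement noise, and replaces the single fitted kernel above with a particle approximation of the hyperparameter posterior obtained via adaptive MCMC.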
Subject
Computer Graphics and Computer-Aided Design, Computer Science Applications, Mechanical Engineering, Mechanics of Materials
Cited by 15 articles.