Abstract
Science and engineering applications are typically associated with expensive optimization problems, where the goal is to identify optimal design solutions and states of the system of interest. Bayesian optimization and active learning compute surrogate models through efficient adaptive sampling schemes to assist and accelerate this search task toward a given optimization goal. Both methodologies are driven by specific infill/learning criteria, which quantify the utility, with respect to the set goal, of evaluating the objective function at unknown combinations of the optimization variables. While the two fields have seen exponential growth in popularity over the past decades, their dualism and synergy have received relatively little attention to date. This paper discusses and formalizes the synergy between Bayesian optimization and active learning as symbiotic adaptive sampling methodologies driven by common principles. In particular, we demonstrate this unified perspective by formalizing the analogy between Bayesian infill criteria and active learning criteria as the driving principles of both goal-driven procedures. To support our original perspective, we propose a general classification of adaptive sampling techniques to highlight similarities and differences among the vast families of adaptive sampling, active learning, and Bayesian optimization. Accordingly, the synergy is demonstrated by mapping the Bayesian infill criteria onto the active learning criteria, and is formalized for searches informed by a single information source as well as by multiple levels of fidelity. In addition, we provide guidelines for applying these learning criteria by investigating the performance of different Bayesian schemes on a variety of benchmark problems, highlighting benefits and limitations with respect to the mathematical properties that characterize real-world applications.
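As an illustration of the kind of Bayesian infill criterion the abstract refers to, the sketch below evaluates the standard expected-improvement acquisition for a minimization problem, given the posterior mean and standard deviation of a Gaussian-process surrogate. The function name, the exploration margin `xi`, and the toy values are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Expected improvement (minimization) at candidate points with GP
    posterior mean `mu` and standard deviation `sigma`, given the
    incumbent best observed value `f_best`. `xi` adds a small
    exploration margin (hypothetical default)."""
    sigma = np.maximum(sigma, 1e-12)   # guard against division by zero
    imp = f_best - mu - xi             # predicted improvement over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage: score three candidates from a (hypothetical) GP surrogate.
mu = np.array([0.8, 0.2, 0.5])
sigma = np.array([0.1, 0.3, 0.05])
print(expected_improvement(mu, sigma, f_best=0.4))
```

In this sketch the candidate with the highest score balances a low predicted mean against high predictive uncertainty, which is the exploitation/exploration trade-off that both Bayesian infill criteria and active learning criteria encode.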
Publisher
Springer Science and Business Media LLC