Abstract
<div>This paper introduces a benchmarking framework for the rigorous evaluation of parallel model-based optimizers on expensive functions.</div><div>The framework relates estimated costs of parallel function evaluations on real-world problems to known sets of test functions.</div><div>Such real-world problems are not always readily available (e.g., due to confidentiality or proprietary software).</div><div>Therefore, new test problems are created by Gaussian process simulation.</div><div>The proposed framework is applied in an extensive benchmark study that compares multiple state-of-the-art parallel optimizers with a novel model-based algorithm, which combines an explorative search for global model quality with parallel local searches for increased function exploitation.</div><div>The benchmarking framework is used to systematically configure good batch sizes for parallel algorithms based on landscape properties.</div><div>Furthermore, we introduce a proof of concept for a novel automatic batch size configuration.</div><div>Its predictive quality is evaluated on a large set of test functions as well as on the functions generated by Gaussian process simulation.</div><div>The introduced algorithm outperforms multiple state-of-the-art optimizers, especially on multi-modal problems.</div><div>It also proves particularly robust across various problem landscapes and performs well with all tested batch sizes.</div><div>Consequently, it is well suited for black-box problems.</div>
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Cited by
1 article.
1. Enhancing Surrogate-Based Optimization Through Parallelization (2023)