Affiliations:
1. School of Computer Science and Technology, International Research Institute for Artificial Intelligence, Harbin Institute of Technology, Shenzhen, China. zhengweijie@hit.edu.cn
2. Laboratoire d'Informatique (LIX), École Polytechnique, CNRS, Institut Polytechnique de Paris, Palaiseau, France. doerr@lix.polytechnique.fr
Abstract
Multiobjective evolutionary algorithms are successfully applied to many real-world multiobjective optimization problems. As for many other AI methods, the theoretical understanding of these algorithms lags far behind their success in practice. In particular, previous theory work mostly considers easy problems composed of unimodal objectives.
As a first step towards a deeper understanding of how evolutionary algorithms solve multimodal multiobjective problems, we propose the OneJumpZeroJump problem, a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark. We prove that the simple evolutionary multiobjective optimizer (SEMO) with probability one does not compute the full Pareto front, regardless of the runtime. In contrast, for all problem sizes n and all jump sizes k ∈ [4..n/2 − 1], the global SEMO (GSEMO) covers the Pareto front in an expected number of Θ((n − 2k)n^k) iterations. For k = o(n), we also show the tighter bound (3/2)e·n^(k+1) ± o(n^(k+1)), which might be the first runtime bound for an MOEA that is tight apart from lower-order terms. We also combine the GSEMO with two approaches that showed advantages in single-objective multimodal problems. When using the GSEMO with a heavy-tailed mutation operator, the expected runtime improves by a factor of at least k^Ω(k). When adapting the recent stagnation-detection strategy of Rajabi and Witt (2022) to the GSEMO, the expected runtime also improves by a factor of at least k^Ω(k) and surpasses the heavy-tailed GSEMO by a small polynomial factor in k. Via an experimental analysis, we show that these asymptotic differences are visible already for small problem sizes: A factor-5 speed-up from heavy-tailed mutation and a factor-10 speed-up from stagnation detection can be observed already for jump size 4 and problem sizes between 10 and 50. Overall, our results show that the ideas recently developed to aid single-objective evolutionary algorithms in coping with local optima can be effectively employed also in multiobjective optimization.
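To make the objects named in the abstract concrete, the following is a minimal, illustrative Python sketch (not the authors' code) of the OneJumpZeroJump objectives, the GSEMO loop, and a heavy-tailed (power-law) mutation rate. The function names, the power-law exponent beta = 1.5, and the fixed iteration budget are assumptions made for this example; the stagnation-detection variant is omitted.

```python
import random


def jump(ones: int, n: int, k: int) -> int:
    """Classic Jump_k value for a bit string with `ones` one-bits."""
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones


def one_jump_zero_jump(x, k: int):
    """Bi-objective value: Jump_k counted on the ones and on the zeros."""
    n, ones = len(x), sum(x)
    return jump(ones, n, k), jump(n - ones, n, k)


def standard_bit_mutation(x, rate: float):
    """Flip each bit independently with probability `rate`."""
    return tuple(b ^ (random.random() < rate) for b in x)


def power_law_rate(n: int, beta: float = 1.5) -> float:
    """Heavy-tailed rate alpha/n with alpha drawn from a power law (exponent beta, an assumed default)."""
    alphas = range(1, n // 2 + 1)
    weights = [a ** (-beta) for a in alphas]
    return random.choices(alphas, weights=weights)[0] / n


def weakly_dominates(u, v):
    """True if u is at least as good as v in every objective (maximization)."""
    return all(a >= b for a, b in zip(u, v))


def dominates(u, v):
    """Strict domination: weakly dominates and strictly better somewhere."""
    return weakly_dominates(u, v) and u != v


def gsemo(n: int, k: int, budget: int, heavy_tailed: bool = False):
    """GSEMO: keep mutually non-dominated solutions, mutate a uniformly chosen one."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {x: one_jump_zero_jump(x, k)}
    for _ in range(budget):
        parent = random.choice(list(pop))
        rate = power_law_rate(n) if heavy_tailed else 1.0 / n
        y = standard_bit_mutation(parent, rate)
        fy = one_jump_zero_jump(y, k)
        if any(dominates(fz, fy) for fz in pop.values()):
            continue  # a stored solution strictly dominates the offspring: discard it
        # keep only solutions not weakly dominated by the offspring, then add it
        pop = {z: fz for z, fz in pop.items() if not weakly_dominates(fy, fz)}
        pop[y] = fy
    return pop
```

For instance, `gsemo(n=30, k=4, budget=200_000, heavy_tailed=True)` returns the final population; keying the population dictionary by bit string keeps exactly one genotype per non-dominated objective vector, mirroring how (G)SEMO stores only mutually non-dominated solutions.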
Subject
Computational Mathematics
References (75 articles)
1. Antipov: Lazy parameter tuning and control: choosing all parameters randomly from a power-law distribution. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2021.
2. Antipov: Fast mutation in crossover-based algorithms. Algorithmica, 2022.
3. Antipov: Runtime analysis of a heavy-tailed (1+(λ,λ)) genetic algorithm on jump functions. Proceedings of the International Conference on Parallel Problem Solving from Nature, Part II, 2020.
4. Antipov: A rigorous runtime analysis of the (1+(λ,λ)) GA on jump functions. Algorithmica, 2022.
5. Bäck: Optimal mutation rates in genetic search. Proceedings of the International Conference on Genetic Algorithms, 1993.
Cited by: 5 articles.