Abstract
We perform rigorous runtime analyses for the univariate marginal distribution algorithm (UMDA) and the population-based incremental learning (PBIL) algorithm on LeadingOnes. For the UMDA, the best known expected runtime on the function is $\mathcal{O}\left(n\lambda\log\lambda + n^2\right)$ under an offspring population size $\lambda = \Omega(\log n)$ and a parent population size $\mu \le \lambda/(e(1+\delta))$ for any constant $\delta > 0$ (Dang and Lehre, GECCO 2015). No lower bound is known under the same parameter settings, and it also remains open whether the algorithm can still optimise LeadingOnes in polynomial time when $\mu \ge \lambda/(e(1+\delta))$. For the PBIL, an expected runtime of $\mathcal{O}(n^{2+c})$ holds for some constant $c \in (0,1)$ (Wu, Kolonko and Möhring, IEEE TEVC 2017). Although the PBIL is a generalisation of the UMDA, this upper bound is asymptotically looser than the UMDA's bound of $\mathcal{O}\left(n^2\right)$ for $\lambda = \Omega(\log n) \cap \mathcal{O}\left(n/\log n\right)$; furthermore, the required population size is very large, i.e., $\lambda = \Omega(n^{1+c})$. Our contributions are threefold: (1) we show that the UMDA with $\mu = \Omega(\log n)$ and $\lambda \le \mu e^{1-\varepsilon}/(1+\delta)$ for any constants $\varepsilon \in (0,1)$ and $0 < \delta \le e^{1-\varepsilon} - 1$ requires an expected runtime of $e^{\Omega(\mu)}$ on LeadingOnes; (2) we prove an upper bound of $\mathcal{O}\left(n\lambda\log\lambda + n^2\right)$ for the PBIL, improving the previous bound of $\mathcal{O}\left(n^{2+c}\right)$ by a significant factor of $\Theta(n^c)$; and (3) we consider, for the first time, the two algorithms on LeadingOnes in a noisy environment and obtain an expected runtime of $\mathcal{O}\left(n^2\right)$ for appropriate parameter settings. Our results emphasise that despite the independence assumption in their probabilistic models, the UMDA and the PBIL with fine-tuned parameter choices can still cope very well with variable interactions.
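The relationship between the two algorithms described above can be made concrete with a minimal sketch. The following Python implements PBIL on LeadingOnes with a smoothing rate `rho`; setting `rho = 1.0` recovers the UMDA as a special case, which is the sense in which PBIL generalises it. Parameter values, function names, and the generation cap are illustrative choices, not from the paper; the frequency borders $1/n$ and $1 - 1/n$ are the standard margins used to prevent premature fixation.

```python
import random

def leading_ones(x):
    """LeadingOnes: number of consecutive 1-bits at the start of x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def pbil(n, lam, mu, rho=1.0, max_gens=5000, seed=0):
    """PBIL on LeadingOnes; rho = 1.0 recovers the UMDA.

    Returns the generation at which the optimum was first sampled,
    or max_gens on failure. Bit frequencies are capped at the
    borders 1/n and 1 - 1/n so no bit can fixate permanently.
    """
    rng = random.Random(seed)
    p = [0.5] * n  # probabilistic model: independent marginal frequencies
    for gen in range(max_gens):
        # Sample lambda offspring from the current model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=leading_ones, reverse=True)
        if leading_ones(pop[0]) == n:
            return gen
        # Shift the model towards the marginals of the mu fittest.
        for i in range(n):
            freq = sum(x[i] for x in pop[:mu]) / mu
            q = (1 - rho) * p[i] + rho * freq
            p[i] = min(max(q, 1 / n), 1 - 1 / n)
    return max_gens
```

With `rho = 1.0` the model is replaced wholesale by the selected individuals' frequencies (UMDA); a smaller `rho` blends old and new frequencies, which is the smoothed update the PBIL analysis above concerns.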
Publisher: Springer Science and Business Media LLC
Subject: Applied Mathematics, Computer Science Applications, General Computer Science
References (47 articles)
1. Baluja, S.: Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning. Technical report, Carnegie Mellon University (1994)
2. Bosman, P.A.N., Thierens, D.: The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Trans. Evol. Comput. 7(2), 174–188 (2003)
3. Corus, D., Dang, D.C., Eremeev, A.V., Lehre, P.K.: Level-based analysis of genetic algorithms and other search processes. IEEE Trans. Evol. Comput. 22(5), 707–719 (2018)
4. Dang, D.C., Lehre, P.K.: Efficient optimisation of noisy fitness functions with population-based evolutionary algorithms. In: Proceedings of the Conference on Foundations of Genetic Algorithms, FOGA ’15, pp. 62–68 (2015)
5. Dang, D.C., Lehre, P.K.: Simplified runtime analysis of estimation of distribution algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’15, pp. 513–518 (2015)