1. M. Andrychowicz, M. Denil, S. Gomez, M.W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, N. de Freitas, Learning to learn by gradient descent by gradient descent. In: NIPS’16: Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 3988–3996 (2016)
2. J. de Armas, E. Lalla-Ruiz, S.L. Tilahun, S. Voß, Similarity in metaheuristics: a gentle step towards a comparison methodology. Nat. Comput. (2021). https://doi.org/10.1007/s11047-020-09837-9
3. A. Auger, N. Hansen, A restart CMA evolution strategy with increasing population size. In: Proceedings of the IEEE Congress on Evolutionary Computation, IEEE CEC ’05, vol. 2, pp. 1769–1776. IEEE (2005)
4. C. Blum, G. Ochoa, A comparative analysis of two matheuristics by means of merged local optima networks. Eur. J. Oper. Res. 290(1), 36–56 (2021)
5. C. Blum, G.R. Raidl, Hybrid Metaheuristics: Powerful Tools for Optimization (Springer, Berlin, 2016)