Abstract
Several conjugate gradient (CG) parameters have led to promising methods for optimization problems. However, some of these parameters, for example PRP, HS, and DL, do not guarantee sufficient descent of the search direction. In this work, we introduce new spectral-like CG methods that achieve the sufficient descent property independently of any line search (LSE) and for arbitrary nonnegative CG parameters. We establish the global convergence of these methods for four different parameters under the Wolfe LSE. Our algorithm achieves this without regular restarts or convexity assumptions on the objective functions. The sequences generated by our algorithm identify points that satisfy the first-order necessary condition for Pareto optimality. We conduct computational experiments to demonstrate the implementation and effectiveness of the proposed methods. The proposed spectral-like methods, namely the nonnegative SPRP, SHZ, SDL, and SHS methods (listed in order of performance), outperform the HZ and SP methods in terms of the number of iterations, function evaluations, and gradient evaluations.
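For orientation, the classical scalar-optimization forms of the CG parameters named above are well known; the vector-optimization versions analyzed in the paper extend these. Writing g_k for the gradient at iterate x_k, d_k for the search direction, s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k, and t > 0 for the Dai–Liao parameter, a minimal LaTeX sketch is:

\beta_k^{\mathrm{PRP}} = \frac{g_{k+1}^{\top} y_k}{\|g_k\|^2}, \qquad
\beta_k^{\mathrm{HS}} = \frac{g_{k+1}^{\top} y_k}{d_k^{\top} y_k}, \qquad
\beta_k^{\mathrm{DL}} = \frac{g_{k+1}^{\top} y_k - t\, g_{k+1}^{\top} s_k}{d_k^{\top} y_k}.

One common spectral-like construction (assumed here for illustration, not necessarily the paper's exact scaling) takes
d_{k+1} = -\theta_{k+1}\, g_{k+1} + \beta_k d_k,
with the spectral parameter \theta_{k+1} chosen so that
g_{k+1}^{\top} d_{k+1} \le -c\, \|g_{k+1}\|^2, \quad c > 0,
holds at every iteration, i.e. the sufficient descent property, regardless of the line search used and for any \beta_k \ge 0.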
Funder
King Mongkut’s University of Technology North Bangkok
Publisher
Public Library of Science (PLoS)
References: 61 articles.
1. L. R. Lucambio Pérez. Nonlinear conjugate gradient methods for vector optimization. SIAM Journal on Optimization, 2018.
2. Q. Hu. Alternative extension of the Hager–Zhang conjugate gradient method for vector optimization. Computational Optimization and Applications, 2024.
3. J. Yahaya. Efficient hybrid conjugate gradient techniques for vector optimization. Results in Control and Optimization, 2024.
4. E. Polak. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle, Série rouge, 1969.
5. M. R. Hestenes. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 1952.
Cited by
1 article.