Abstract
In this paper, we derive a new linear convergence rate for the gradient method with fixed step lengths for non-convex smooth optimization problems satisfying the Polyak-Łojasiewicz (PŁ) inequality. We establish that the PŁ inequality is a necessary and sufficient condition for linear convergence to the optimal value for this class of problems. We list some related classes of functions for which the gradient method may enjoy a linear convergence rate. Moreover, we investigate their relationship with the PŁ inequality.
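For context, a minimal sketch of the classical setting (this is the well-known baseline bound, not the paper's new rate): a differentiable function f with L-Lipschitz gradient satisfies the PŁ inequality with parameter \mu > 0 if

\[ \tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \quad \text{for all } x, \]

where f^{*} denotes the optimal value. Under this condition, the gradient method with the fixed step length 1/L,

\[ x_{k+1} = x_{k} - \tfrac{1}{L}\,\nabla f(x_{k}), \]

contracts the objective gap linearly:

\[ f(x_{k+1}) - f^{*} \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)\bigl(f(x_{k}) - f^{*}\bigr). \]

The paper derives a new rate of this type for general fixed step lengths.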
Publisher
Springer Science and Business Media LLC
Subject
Control and Optimization; Business, Management and Accounting (miscellaneous)
Cited by
4 articles.