Authors:
André Uschmajew, Bart Vandereycken
Abstract
Based on a result by Taylor et al. (J Optim Theory Appl 178(2):455–476, 2018) on the attainable convergence rate of gradient descent for smooth and strongly convex functions in terms of function values, an elementary convergence analysis for general descent methods with fixed step sizes is presented. It covers general variable metric methods, gradient-related search directions under angle and scaling conditions, as well as inexact gradient methods. In all cases, optimal rates are obtained.
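For a concrete reading of the cited Taylor et al. result: for an L-smooth, mu-strongly convex objective, fixed-step gradient descent with step size h contracts function values by the tight factor max(|1 - h*mu|, |1 - h*L|)^2 per step, which the optimal step h = 2/(L + mu) makes equal to ((L - mu)/(L + mu))^2. The following minimal sketch is not taken from the paper; all names (mu, L, h, f, grad) are illustrative choices, and it merely checks this contraction on a quadratic test problem where the rate can be verified directly.

```python
# Minimal sketch (assumed setup, not the authors' code): fixed-step gradient
# descent on an L-smooth, mu-strongly convex quadratic, checking the per-step
# contraction max(|1 - h*mu|, |1 - h*L|)^2 in function values (Taylor et al. 2018).
import numpy as np

mu, L = 1.0, 10.0                    # strong convexity / smoothness constants
A = np.diag(np.linspace(mu, L, 5))   # quadratic with spectrum in [mu, L]
f = lambda x: 0.5 * x @ A @ x        # minimizer x* = 0, optimal value f* = 0
grad = lambda x: A @ x

h = 2.0 / (L + mu)                   # optimal fixed step size
rate = max(abs(1 - h * mu), abs(1 - h * L)) ** 2   # = ((L-mu)/(L+mu))**2 here

x = np.ones(5)
for k in range(20):
    x_new = x - h * grad(x)
    # each step should contract function values by at least `rate`
    assert f(x_new) <= rate * f(x) + 1e-12
    x = x_new
print(f"contraction factor per step: {rate:.4f}")  # ((10-1)/(10+1))^2 ~ 0.6694
```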
Funder
Max Planck Institute for Mathematics in the Sciences
Publisher
Springer Science and Business Media LLC
Subject
Applied Mathematics, Management Science and Operations Research, Control and Optimization
References (10 articles)
1. Cohen, A.I.: Stepsize analysis for descent methods. J. Optim. Theory Appl. 33(2), 187–205 (1981)
2. de Klerk, E., Glineur, F., Taylor, A.B.: On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions. Optim. Lett. 11(7), 1185–1199 (2017)
3. de Klerk, E., Glineur, F., Taylor, A.B.: Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation. SIAM J. Optim. 30(3), 2053–2082 (2020)
4. Gannot, O.: A frequency-domain analysis of inexact gradient methods. Math. Program. (2021)
5. Munthe-Kaas, H.: The convergence rate of inexact preconditioned steepest descent algorithm for solving linear systems. Technical report NA-87-04, Stanford University (1987)