Affiliation:
1. Department of Industrial Systems Engineering and Management, National University of Singapore, Singapore 117576;
2. Department of Electrical and Computer Engineering, University of Minnesota Twin Cities, Minneapolis, Minnesota 55455
Abstract
Most first-order methods rely on the global Lipschitz continuity of the objective gradient, which fails to hold in many problems. This paper develops a sequential local optimization (SLO) framework for first-order algorithms to optimize problems without a Lipschitz gradient. Operating on the assumption that the gradient is locally Lipschitz continuous over any compact set, SLO develops a careful scheme to control the distance between successive iterates. The proposed framework adapts easily to existing first-order methods, such as projected gradient descent (PGD), truncated gradient descent (TGD), and a parameter-free variant of Armijo linesearch. We show that SLO requires $\mathcal{O}\!\left(\mathcal{L}_1(Y)\,\Delta\,\epsilon^{-2}\right)$ gradient evaluations to find an $\epsilon$-stationary point, where $\Delta$ is the initial optimality gap, $Y$ is a certain compact set of $\mathcal{O}(\Delta/\epsilon)$ radius, and $\mathcal{L}_i(Y)$ denotes the Lipschitz constant of the $i$th-order derivatives over $Y$. It is worth noting that our analysis provides the first nonasymptotic convergence rate for (a slight variant of) the Armijo linesearch algorithm without a globally Lipschitz continuous gradient or convexity. As a generic framework, SLO can also incorporate more complicated subroutines, such as a variant of the accelerated gradient descent (AGD) method that harnesses the problem's second-order smoothness without Hessian computation and achieves an improved $\tilde{\mathcal{O}}\!\left(\epsilon^{-7/4}\right)$ complexity.
Funding: J. Zhang is supported by the Singapore Ministry of Education (MOE) AcRF [Grant A-0009530-04-00]. M. Hong is supported by NSF [Grants CIF-1910385 and EPCN-2311007].
Supplemental Material: The online appendix is available at https://doi.org/10.1287/ijoo.2021.0029.
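The core mechanism described in the abstract, capping the distance between successive iterates so that only a local Lipschitz constant over the region actually visited matters, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's exact PGD/TGD/Armijo variants: the function name slo_gradient_descent, the fixed per-step radius, and the backtracking constants are hypothetical choices made for the example.

```python
import numpy as np

def slo_gradient_descent(f, grad, x0, eps=1e-3, radius=1.0,
                         step0=1.0, shrink=0.5, c=1e-4, max_iter=10_000):
    """Illustrative SLO-style descent loop (a sketch, not the paper's method).

    Each trial step is capped so the next iterate stays within `radius`
    of the current one, and an Armijo backtracking test replaces any
    global Lipschitz constant: the stepsize is shrunk until sufficient
    decrease holds on the local segment actually traversed.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # eps-stationary point found
            return x
        step = step0
        while True:
            d = -step * g
            dnorm = np.linalg.norm(d)
            if dnorm > radius:                # keep successive iterates close
                d *= radius / dnorm
            x_new = x + d
            # Armijo sufficient-decrease test over the local move only
            if f(x_new) <= f(x) + c * g.dot(x_new - x):
                break
            step *= shrink                    # local curvature too large: shrink
        x = x_new
    return x

# Example: minimize f(x) = ||x||^4, whose gradient is not globally
# Lipschitz, starting far from the minimizer at the origin.
f = lambda x: np.sum(x**2) ** 2
grad = lambda x: 4 * np.sum(x**2) * x
print(slo_gradient_descent(f, grad, x0=np.array([5.0, -3.0])))
```

The radius cap plays the role that the global Lipschitz constant plays in classical analyses: it confines each iteration to a compact neighborhood where a local Lipschitz bound applies, which is the intuition behind the $\mathcal{L}_i(Y)$ quantities in the stated complexity.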
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)
Cited by
1 article.
1. A Smoothed Bregman Proximal Gradient Algorithm for Decentralized Nonconvex Optimization. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024-04-14.