Affiliation:
1. Department of Industrial and Systems Engineering, Lehigh University
2. Department of Applied Mathematics and Statistics, Johns Hopkins University
Abstract
An algorithm for solving smooth nonconvex optimization problems is proposed that, in the worst case, takes ${\mathscr O}(\varepsilon^{-3/2})$ iterations to drive the norm of the gradient of the objective function below a prescribed positive real number $\varepsilon$ and can take ${\mathscr O}(\varepsilon^{-3})$ iterations to drive the leftmost eigenvalue of the Hessian of the objective above $-\varepsilon$. The proposed algorithm is a general framework that covers a wide range of techniques, including quadratically and cubically regularized Newton methods such as the Adaptive Regularization using Cubics (ARC) method and the recently proposed Trust-Region Algorithm with Contractions and Expansions (TRACE). The generality of the method is achieved through generic conditions that each trial step is required to satisfy, which in particular allow inexact regularized Newton steps to be used. These conditions center on a new subproblem that can be solved approximately to obtain trial steps satisfying the conditions. A new instance of the framework, distinct from ARC and TRACE, is described that may be viewed as a hybrid between quadratically and cubically regularized Newton methods. Numerical results demonstrate that this hybrid algorithm outperforms a cubically regularized Newton method.
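For intuition, the following is a minimal, hypothetical sketch of a single cubically regularized Newton step of the kind the abstract refers to (an ARC-style model). It is not the paper's inexact framework: the paper only requires trial steps to satisfy generic conditions via an approximate subproblem solve, whereas this sketch minimizes the cubic model $m(s) = g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3$ directly. The function name and solver choice here are illustrative assumptions.

```python
# Hypothetical sketch of one cubically regularized Newton (ARC-style) step.
# Assumes access to the gradient g and Hessian H at the current iterate;
# this is NOT the paper's method, only a generic illustration of the model.
import numpy as np
from scipy.optimize import minimize

def cubic_reg_step(g, H, sigma):
    """Approximately minimize m(s) = g's + 0.5 s'Hs + (sigma/3)||s||^3."""
    def model(s):
        return g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3

    def model_grad(s):
        # Gradient of the cubic model: g + Hs + sigma * ||s|| * s
        return g + H @ s + sigma * np.linalg.norm(s) * s

    # Start from the steepest-descent direction; any reasonable guess works.
    res = minimize(model, -g, jac=model_grad, method="L-BFGS-B")
    return res.x

# Small example with an indefinite Hessian (nonconvex local model):
g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])
s = cubic_reg_step(g, H, sigma=1.0)
print("trial step:", s)
```

The cubic term keeps the model bounded below even when $H$ is indefinite, which is why such steps remain well defined on nonconvex problems; the hybrid instance described in the paper mixes this kind of regularization with a quadratic one.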
Funder
U.S. Department of Energy
Office of Science
National Science Foundation
Division of Mathematical Sciences
Publisher
Oxford University Press (OUP)
Subject
Applied Mathematics, Computational Mathematics, General Mathematics
Cited by
9 articles.