Abstract
We present a new feasible proximal gradient method for constrained optimization in which both the objective and the constraint functions are given by the sum of a smooth, possibly nonconvex function and a convex, simple function. The algorithm converts the original problem into a sequence of convex subproblems. Formulating each subproblem requires at most one gradient evaluation of the original objective and constraint functions, and either exact or approximate subproblem solutions can be computed efficiently in many cases. An important feature of the algorithm is its constraint level parameter: by carefully increasing this level for each subproblem, we provide a simple way to overcome the challenge of bounding the Lagrange multipliers and show that the algorithm follows a strictly feasible solution path until it converges to a stationary point. We develop a simple, proximal-gradient-descent-type analysis showing that the complexity bound of this new algorithm is comparable to that of gradient descent in the unconstrained setting, which is new in the literature. Exploiting this new design and analysis technique, we extend our algorithm to more challenging constrained optimization problems in which (1) the objective is a stochastic or finite-sum function, and (2) structured nonsmooth functions replace the smooth components of both the objective and constraint functions. Complexity results for these problems also appear to be new in the literature. Finally, our method can be applied to convex function-constrained problems, for which we show complexities similar to those of the proximal gradient method.
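To make the abstract's scheme concrete, here is a minimal numerical sketch of one possible instance, not the paper's exact algorithm: we take the convex simple parts of the objective and constraint to be zero, replace the constraint by a quadratic upper model at the current iterate (so each convex subproblem reduces to a closed-form projection of a gradient step onto a ball), and raise the constraint level geometrically toward zero. The toy instance, the name `level_prox_step`, the step size, and the level schedule are all illustrative assumptions.

```python
import numpy as np

# Toy instance: minimize f(x) = ||x - c||^2 subject to g(x) = ||x||^2 - 1 <= 0,
# with c outside the unit ball, so the constraint is active at the solution.
c = np.array([2.0, 1.0])
f_grad = lambda x: 2.0 * (x - c)   # gradient of f, Lipschitz with L_f = 2
g_val  = lambda x: x @ x - 1.0     # smooth constraint function
g_grad = lambda x: 2.0 * x         # gradient of g, Lipschitz with L_g = 2
L_f, L_g = 2.0, 2.0

def level_prox_step(x, eta, gamma):
    """One feasible proximal step (illustrative, not the paper's method):
    project the gradient step onto the ball
      {y : g(x) + g'(x)^T (y - x) + (L_g/2)||y - x||^2 <= eta},
    a quadratic upper model of g, so the new iterate satisfies g(y) <= eta
    by construction and the path stays strictly feasible while eta < 0."""
    grad_g = g_grad(x)
    center = x - grad_g / L_g
    rad2 = grad_g @ grad_g / L_g**2 + 2.0 * (eta - g_val(x)) / L_g
    y = x - gamma * f_grad(x)      # plain gradient step on the objective
    d = y - center
    dist = np.linalg.norm(d)
    r = np.sqrt(max(rad2, 0.0))
    return center + (r / dist) * d if dist > r else y

x = np.zeros(2)                    # strictly feasible start: g(x0) = -1
eta = -0.5                         # initial (strict) constraint level
for k in range(200):
    x = level_prox_step(x, eta, 1.0 / L_f)
    eta *= 0.9                     # raise the level toward 0 geometrically

print(x, g_val(x))  # x approaches c/||c|| on the unit sphere; g(x) <= 0 throughout
```

Because the quadratic model majorizes g, every iterate satisfies g(x_k) <= eta_k < 0, which mirrors the strictly feasible solution path described above; letting eta_k increase to 0 drives the iterates to the boundary solution without ever leaving the feasible region.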
Funder
National Institute of Food and Agriculture
Division of Mathematical Sciences
National Natural Science Foundation of China-China Academy of General Technology Joint Fund for Basic Research
Division of Computing and Communication Foundations
Publisher
Springer Science and Business Media LLC