Abstract
This paper studies several solution paths of sparse quadratic minimization problems as a function of the weighting parameter in the bi-objective of estimation loss versus solution sparsity. Three such paths are considered: the "$$\ell _0$$-path", where the discontinuous $$\ell _0$$-function provides the exact sparsity count; the "$$\ell _1$$-path", where the $$\ell _1$$-function provides a convex surrogate of the sparsity count; and the "capped $$\ell _1$$-path", where the nonconvex, nondifferentiable capped $$\ell _1$$-function aims to enhance the $$\ell _1$$-approximation. Serving different purposes, these three formulations differ from one another both analytically and computationally. Our results deepen the understanding of (old and new) properties of the associated paths, highlight the pros, cons, and tradeoffs of these sparse optimization models, and provide numerical evidence to support the practical superiority of the capped $$\ell _1$$-path. Our study of the capped $$\ell _1$$-path is interesting in its own right, as the path pertains to computable directionally stationary (in this context, strongly locally minimizing, as opposed to globally optimal) solutions of a parametric nonconvex nondifferentiable optimization problem. Motivated by classical parametric quadratic programming theory and reinforced by modern statistical learning studies, both of which cast an exponential perspective on fully describing such solution paths, we also aim to address the question of whether some of these paths can be fully traced in strongly polynomial time in the problem dimensions. A major conclusion of this paper is that a path of directional stationary solutions of the capped $$\ell _1$$-regularized problem offers interesting theoretical properties and a practical compromise between the $$\ell _0$$-path and the $$\ell _1$$-path. Indeed, while the $$\ell _0$$-path is computationally prohibitive, being greatly handicapped by the repeated solution of mixed-integer nonlinear programs, the quality of the $$\ell _1$$-path, in terms of the two criteria (loss and sparsity) in the estimation objective, is inferior to that of the capped $$\ell _1$$-path; the latter path can be obtained efficiently by a parametric pivoting-like scheme supplemented by an algorithm that takes advantage of the Z-matrix structure of the loss function.
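For concreteness, the three parametric problems can be written in one standard form (a sketch assuming a quadratic loss $$\tfrac{1}{2}x^\top Q x + q^\top x$$, a weighting parameter $$\gamma > 0$$, and a cap parameter $$\delta > 0$$; the paper's own notation may differ):

$$\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}x^\top Q x + q^\top x + \gamma \sum_{i=1}^{n} |x_i|_0, \qquad \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}x^\top Q x + q^\top x + \gamma \, \Vert x \Vert_1, \qquad \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}x^\top Q x + q^\top x + \gamma \sum_{i=1}^{n} \min\!\left( \frac{|x_i|}{\delta},\, 1 \right),$$

where $$|t|_0 = 1$$ if $$t \ne 0$$ and $$|t|_0 = 0$$ otherwise. A solution path records how the minimizer (or, for the capped $$\ell _1$$ problem, a directionally stationary point) varies as $$\gamma$$ sweeps over $$(0, \infty)$$; note that the capped $$\ell _1$$-penalty recovers the $$\ell _0$$-count pointwise as $$\delta \downarrow 0$$.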
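To illustrate what tracing a path over the weighting parameter means in practice, the following minimal sketch computes an $$\ell _1$$ (lasso) path on synthetic data with scikit-learn's lasso_path; the data and setup are illustrative only and are not the paper's experimental protocol or its pivoting-based algorithm.

```python
# A minimal sketch of tracing an l1 solution path numerically.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n_samples, n_features, n_nonzero = 100, 20, 5

# Sparse ground truth: only the first few coefficients are nonzero.
X = rng.standard_normal((n_samples, n_features))
beta_true = np.zeros(n_features)
beta_true[:n_nonzero] = rng.standard_normal(n_nonzero)
y = X @ beta_true + 0.1 * rng.standard_normal(n_samples)

# lasso_path solves min_b (1/(2m))*||y - X b||^2 + alpha*||b||_1
# over a decreasing grid of weighting parameters alpha.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

# Sparsity along the path: larger alpha yields a sparser solution.
for alpha, coef in zip(alphas, coefs.T):
    print(f"alpha = {alpha:.4f}, nonzeros = {np.count_nonzero(coef)}")
```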
Funder
National Science Foundation
Air Force Office of Scientific Research
Publisher
Springer Science and Business Media LLC
Subject
General Mathematics, Software
Cited by
4 articles.