Abstract
Sparsity finds applications in diverse areas such as statistics, machine learning, and signal processing. Computations over sparse structures are less complex than their dense counterparts and require less storage. This paper proposes a heuristic method for retrieving sparse approximate solutions of optimization problems by minimizing the $$\ell_{p}$$ quasi-norm, where $$0<p<1$$. An iterative two-block algorithm for minimizing the $$\ell_{p}$$ quasi-norm subject to convex constraints is proposed. The proposed algorithm requires solving for the roots of a scalar polynomial, as opposed to applying a soft-thresholding operator as in $$\ell_{1}$$ norm minimization. The algorithm's merit lies in its ability to solve the $$\ell_{p}$$ quasi-norm minimization subject to any convex constraint set. For the specific case of constraints defined by differentiable functions with Lipschitz continuous gradients, a second, faster algorithm is proposed. Using a proximal gradient step, we avoid the convex projection step and hence enhance the algorithm's speed while proving its convergence. We present various applications where the proposed algorithm excels, namely sparse signal reconstruction, system identification, and matrix completion. The results demonstrate the significant gains obtained by the proposed algorithm compared to other $$\ell_{p}$$ quasi-norm based methods presented in previous literature.
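To make the contrast between $$\ell_{1}$$ soft thresholding and the root-finding step concrete, the following is a minimal illustrative sketch, not the paper's algorithm. It takes the special case $$p=1/2$$, where the scalar stationarity condition of the proximal subproblem reduces to a cubic equation; the helpers `soft_threshold` and `prox_lp_half` and the use of `numpy.roots` are our own illustrative choices under that assumption.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * |x| (the l1 case): soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_lp_half(v, lam):
    """
    Elementwise prox of lam * |x|^(1/2) (an lp quasi-norm term with p = 1/2),
    computed by scalar root-finding instead of a closed-form thresholding rule.

    For a scalar v > 0, a nonzero stationary point x > 0 satisfies
        x - v + (lam/2) * x^(-1/2) = 0.
    Substituting t = sqrt(x) turns this into the cubic
        t^3 - v*t + lam/2 = 0,
    whose roots are found numerically; each candidate x = t^2 is then
    compared against x = 0 on the original objective.
    """
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    for i, vi in np.ndenumerate(v):
        s, a = np.sign(vi), abs(vi)
        roots = np.roots([1.0, 0.0, -a, lam / 2.0])   # t^3 - a*t + lam/2 = 0
        best_x, best_obj = 0.0, 0.5 * a**2            # objective value at x = 0
        for t in roots:
            if abs(t.imag) < 1e-12 and t.real > 0:
                x = t.real**2
                obj = 0.5 * (x - a) ** 2 + lam * np.sqrt(x)
                if obj < best_obj:
                    best_x, best_obj = x, obj
        out[i] = s * best_x
    return out
```

In a generic proximal-gradient loop of the form x ← prox(x − τ∇f(x)), swapping `soft_threshold` for `prox_lp_half` replaces the closed-form $$\ell_{1}$$ shrinkage with the polynomial root-finding step the abstract describes; the paper's constrained two-block scheme is more general than this penalized sketch.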
Funder
Foundation for the National Institutes of Health
National Science Foundation
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.