Abstract
Measuring the error by an $\ell^1$-norm, we analyze under sparsity assumptions an $\ell^0$-regularization approach, where the penalty in the Tikhonov functional is complemented by a general stabilizing convex functional. In this context, ill-posed operator equations $Ax = y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Banach space $Y$ are regularized.
For sparse solutions, error estimates as well as linear and sublinear convergence rates are derived based on a variational inequality approach, where the regularization parameter can be chosen either a priori in an appropriate way or a posteriori by the sequential discrepancy principle.
To further illustrate the balance between the $\ell^0$-term and the complementing convex penalty, the important special case of the squared $\ell^2$-norm penalty is investigated, showing the explicit dependence between both terms.
Finally, some numerical experiments verify and illustrate the sparsity promoting properties of corresponding regularized solutions.
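The abstract's penalized Tikhonov functional can be illustrated with a small numerical sketch. The snippet below is an assumption, not the authors' implementation: it minimizes $\tfrac{1}{2}\|Ax-y\|^2 + \alpha\|x\|_0 + \beta\|x\|_2^2$ (the $\ell^0$-term complemented by the squared $\ell^2$-norm penalty discussed above) by proximal-gradient iterations with an elementwise hard-thresholding-type proximal map. All function names, dimensions, and parameter values are hypothetical.

```python
import numpy as np

def prox_l0_l2(v, a, b):
    # Elementwise proximal map of x -> a*||x||_0 + b*||x||_2^2.
    # Nonzero candidate z = v/(1+2b); keep it only if its cost
    # 0.5*(z-v)^2 + a + b*z^2 beats the cost 0.5*v^2 of setting the entry to 0.
    z = v / (1.0 + 2.0 * b)
    keep = 0.5 * (z - v) ** 2 + a + b * z ** 2 < 0.5 * v ** 2
    return np.where(keep, z, 0.0)

def l0_l2_tikhonov(A, y, alpha, beta, steps=500):
    # Proximal-gradient (hard-thresholding) iteration for
    #   min_x 0.5*||A x - y||^2 + alpha*||x||_0 + beta*||x||_2^2 .
    t = 0.9 / np.linalg.norm(A, 2) ** 2      # step size below 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        v = x - t * A.T @ (A @ x - y)        # gradient step on the data-fit term
        x = prox_l0_l2(v, t * alpha, t * beta)
    return x

# Synthetic sparse-recovery example (all values hypothetical)
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_rec = l0_l2_tikhonov(A, y, alpha=0.01, beta=1e-3)
```

With the step size kept below the reciprocal Lipschitz constant of the data-fit gradient, each iteration does not increase the functional, so the residual of the reconstruction stays bounded by that of the zero initial guess.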
Funder
National Natural Science Foundation of China
Natural Science Foundation of Zhejiang Province
Shanghai Municipal Education Commission
Deutsche Forschungsgemeinschaft
Cited by 7 articles.