Abstract
Statistical inverse learning theory, a field that lies at the intersection of inverse problems and statistical learning, has recently gained increasing attention. In an effort to steer this interplay more towards the variational regularization framework, convergence rates have recently been proved, in the symmetric Bregman distance, for a class of convex, p-homogeneous regularizers with p ∈ (1, 2]. Following this path, we take a further step towards the study of sparsity-promoting regularization and extend the aforementioned convergence rates to ℓᵖ-norm regularization, with p ∈ (1, 2), for a special class of non-tight Banach frames, called shearlets, possibly constrained to some convex set. The p = 1 case is approached as the limit case (1, 2) ∋ p → 1, complementing numerical evidence with a (partial) theoretical analysis based on arguments from Γ-convergence theory. We numerically validate our theoretical results in the context of x-ray tomography, under random sampling of the imaging angles, using both simulated and measured data. This application allows us to effectively verify the theoretical decay, in addition to providing a motivation for the extension to shearlet-based regularization.
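To make the variational setup concrete, the following is a minimal illustrative sketch (not the authors' implementation) of an ℓᵖ-regularized reconstruction with p ∈ (1, 2). The forward operator A stands in for a discretized tomographic (Radon-type) operator and W for a shearlet or other frame analysis operator; both are replaced here by random matrices for self-containedness, and plain gradient descent is used since the penalty is differentiable for p > 1.

import numpy as np

# Sketch of the l^p-regularized variational problem, p in (1, 2):
#   min_f  0.5 * ||A f - y||^2 + alpha * ||W f||_p^p
# A: stand-in forward operator (a discretized Radon transform in the paper's
#    tomography setting); W: stand-in analysis operator (a shearlet transform
#    in the paper). Both are hypothetical random matrices here.

rng = np.random.default_rng(0)
n, m, k = 64, 48, 96                            # signal, measurement, frame sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)    # hypothetical forward operator
W = rng.standard_normal((k, n)) / np.sqrt(k)    # hypothetical frame analysis operator
f_true = np.zeros(n)
f_true[rng.choice(n, size=5, replace=False)] = 1.0   # sparse ground truth
y = A @ f_true + 0.01 * rng.standard_normal(m)       # noisy data

p, alpha = 1.5, 1e-3
step = 0.5 / np.linalg.norm(A, 2) ** 2          # conservative fixed step size

def grad(f):
    # For p > 1 the penalty is differentiable: d/dx |x|^p = p * sign(x) * |x|^(p-1).
    Wf = W @ f
    return A.T @ (A @ f - y) + alpha * p * W.T @ (np.sign(Wf) * np.abs(Wf) ** (p - 1))

f = np.zeros(n)
for _ in range(5000):
    f = f - step * grad(f)

print("data misfit:", 0.5 * np.linalg.norm(A @ f - y) ** 2)
print("penalty    :", alpha * np.sum(np.abs(W @ f) ** p))

For p = 1 the penalty is no longer differentiable, so a proximal (ISTA-type) scheme would be needed instead; this is consistent with the paper treating p = 1 as the limit case p → 1 rather than directly.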
Funder
Academy of Finland
Royal Society
Air Force Office of Scientific Research
Engineering and Physical Sciences Research Council
Subject
Applied Mathematics, Computer Science Applications, Mathematical Physics, Signal Processing, Theoretical Computer Science
Cited by
1 article.