Authors:
Andreas Hauptmann, Subhadip Mukherjee, Carola-Bibiane Schönlieb, Ferdia Sherry
Abstract
Regularization is necessary when solving inverse problems to ensure the well-posedness of the solution map. Additionally, it is desirable that the chosen regularization strategy is convergent, in the sense that the solution map converges to a solution of the noise-free operator equation. This provides an important guarantee that stable solutions can be computed for all noise levels and that solutions satisfy the operator equation in the limit of vanishing noise. In recent years, reconstruction in inverse problems has increasingly been approached from a data-driven perspective. Despite empirical success, the majority of data-driven approaches do not provide a convergent regularization strategy. One popular example is iterative plug-and-play (PnP) denoising using off-the-shelf image denoisers. These usually provide only convergence of the PnP iterates to a fixed point, under suitable regularity assumptions on the denoiser, rather than convergence of the method as a regularization technique, that is, under vanishing noise and regularization strength. This paper serves two purposes: first, we provide an overview of the classical regularization theory in inverse problems and survey a few notable recent data-driven methods that are provably convergent regularization schemes. We then discuss PnP algorithms and their established convergence guarantees. Subsequently, we consider PnP algorithms with learned linear denoisers and propose a novel spectral filtering technique of the denoiser to control the strength of regularization. Further, by relating the implicit regularization of the denoiser to an explicit regularization functional, we are the first to rigorously show that PnP with a learned linear denoiser leads to a convergent regularization scheme. The theoretical analysis is corroborated by numerical experiments for the classical inverse problem of tomographic image reconstruction.
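To illustrate the idea of spectrally filtering a learned linear denoiser, the following is a minimal sketch. It assumes the denoiser is a symmetric (or symmetrized) linear map, diagonalizes it, and interpolates its spectrum toward the identity with a parameter `alpha`, so that `alpha = 0` applies no denoising and `alpha = 1` recovers the full denoiser. The interpolation filter used here is purely illustrative and is not the paper's exact construction.

```python
import numpy as np

def spectrally_filtered_denoiser(D, alpha):
    """Illustrative spectral filtering of a linear denoiser.

    D     : (n, n) matrix representing a learned linear denoiser
            (symmetrized below so an orthogonal eigendecomposition exists)
    alpha : filter strength in [0, 1]; 0 -> identity (no regularization),
            1 -> the full (symmetrized) denoiser
    """
    D_sym = 0.5 * (D + D.T)                 # symmetrize the learned map
    eigvals, V = np.linalg.eigh(D_sym)      # D_sym = V diag(eigvals) V^T
    filtered = (1.0 - alpha) + alpha * eigvals  # shrink spectrum toward 1
    return V @ np.diag(filtered) @ V.T

# Example: a small symmetric "denoiser" with eigenvalues in (0, 1]
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
D = np.eye(4) - 0.1 * (A @ A.T) / np.linalg.norm(A @ A.T, 2)

D_half = spectrally_filtered_denoiser(D, 0.5)  # weakened denoiser
```

In a PnP scheme, such a filtered denoiser would replace the original one inside the iteration, with `alpha` playing the role of the regularization strength that is driven to zero as the noise vanishes.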
Publisher
Springer Science and Business Media LLC