Abstract
Neural network functions are assumed to encode the desired solution of an inverse problem very efficiently. In this paper, we consider the problem of solving linear inverse problems with neural network coders. First, we establish correspondences between this formulation and existing concepts in regularization theory, in particular state space regularization, operator decomposition, and iterative regularization methods. A Gauss–Newton method is suitable for solving encoded linear inverse problems, which is supported by a local convergence result. The convergence studies, however, are not complete and rely on a conjecture on the linear independence of activation functions and their derivatives. Some numerical experiments are presented to support the theoretical findings.
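To make the setting concrete, the following is a minimal sketch of a Gauss–Newton iteration for an encoded linear inverse problem A ψ(θ) = y, where the coder ψ is a neural network and one solves for the code θ. The one-layer tanh coder, the random operator A, and all problem sizes are assumptions made purely for illustration; this is not the algorithm or experiments of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: signal dim n, data dim m, code dim p
n, m, p = 20, 12, 5

# Assumed one-layer neural network coder psi(theta) = tanh(W theta)
W = rng.standard_normal((n, p))
A = rng.standard_normal((m, n))      # linear forward operator (random, for illustration)

def psi(theta):
    return np.tanh(W @ theta)

def psi_jac(theta):
    # Jacobian of tanh(W theta): diag(1 - tanh^2(W theta)) @ W
    return (1.0 - np.tanh(W @ theta) ** 2)[:, None] * W

# Synthetic data generated from a ground-truth code
theta_true = rng.standard_normal(p)
y = A @ psi(theta_true)

# Gauss–Newton iteration on the encoded problem F(theta) = A psi(theta)
theta = np.zeros(p)
for k in range(50):
    r = y - A @ psi(theta)            # residual
    J = A @ psi_jac(theta)            # Jacobian of F at the current code
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solves the Gauss–Newton normal equations
    theta = theta + step
    if np.linalg.norm(r) < 1e-10:
        break

print("final residual:", np.linalg.norm(y - A @ psi(theta)))
```

In this sketch the regularizing effect comes only from the low-dimensional code (p < n); the paper's analysis, including the local convergence result, concerns the infinite-dimensional, ill-posed setting.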
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, Radiology, Nuclear Medicine and Imaging, Signal Processing, Algebra and Number Theory, Analysis
Cited by
3 articles.