Authors:
Wang Xiaoyu, Benning Martin
Abstract
We propose a novel framework for the regularized inversion of deep neural networks. The framework is based on the authors' recent work on training feed-forward neural networks without the differentiation of activation functions. It lifts the parameter space into a higher-dimensional space by introducing auxiliary variables and penalizes these variables with tailored Bregman distances. We propose a family of variational regularizations based on these Bregman distances, present theoretical results, and support their practical application with numerical examples. In particular, we present what is, to the best of our knowledge, the first convergence result for the regularized inversion of a single-layer perceptron that assumes only that the solution of the inverse problem lies in the range of the regularization operator, and which shows that the regularized inverse provably converges to the true inverse as the measurement errors converge to zero.
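To make the notion of a Bregman-distance penalty concrete, the following is a minimal illustrative sketch; the symbols $J$, $p$, $\sigma$, $W$, $z$, $\ell$, $R$, $\alpha$, $\beta$ and the specific form of the lifted functional are assumptions chosen for illustration and need not match the paper's exact formulation. For a proper, convex functional $J$ and a subgradient $p \in \partial J(v)$, the (generalized) Bregman distance is
\[
D_J^p(u, v) \;=\; J(u) - J(v) - \langle p,\, u - v \rangle .
\]
For a single-layer perceptron $y = \sigma(Wx)$ with activation $\sigma$ and noisy measurement $y^\delta$, a lifted variational formulation of the inversion task could, for instance, take the form
\[
\min_{x,\, z} \;\; \ell\bigl(\sigma(z),\, y^\delta\bigr) \;+\; \alpha\, R(x) \;+\; \beta\, B\bigl(z,\, Wx\bigr),
\]
where $z$ is an auxiliary variable standing in for the pre-activation $Wx$, $B$ is a tailored Bregman-type penalty coupling $z$ and $Wx$, and $R$ is a regularization functional; this avoids differentiating $\sigma$ directly, in the spirit described in the abstract.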
Subject
Applied Mathematics, Statistics and Probability