Rate-optimal denoising with deep neural networks

Authors:

Reinhard Heckel¹, Wen Huang², Paul Hand³, Vladislav Voroninski⁴

Affiliation:

1. Department of Electrical and Computer Engineering, Rice University, Houston, Texas

2. School of Mathematical Sciences, Xiamen University, Xiamen, China

3. Department of Mathematics and Khoury College of Computer Science, Northeastern University, Boston, Massachusetts

4. Helm.ai, Menlo Park, California, USA

Abstract

Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation. The underlying principle is that neural networks trained on large data sets have empirically been shown to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, a noisy image can be denoised by (i) finding the closest image in the range of the generator or (ii) passing it through an encoder-generator architecture (known as an autoencoder). However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network parameters. In this paper, we consider the problem of denoising an image from additive Gaussian noise using the two generator-based approaches. In both cases, we assume the image is well described by a deep neural network with ReLU activation functions, mapping a $k$-dimensional code to an $n$-dimensional image. In the case of the autoencoder, we show that the feedforward network reduces noise energy by a factor of $O(k/n)$. In the case of optimizing over the range of a generative model, we state and analyze a simple gradient algorithm that minimizes a non-convex loss function and provably reduces noise energy by a factor of $O(k/n)$. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.
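Approach (ii) above — minimizing a non-convex loss over the latent code of a generator — can be illustrated with a toy experiment. The sketch below is not the authors' code: the two-layer random-weight generator, the dimensions `k, m, n`, the step size and the iteration count are all illustrative assumptions, and the plain gradient descent shown here omits algorithmic safeguards used in the paper's analysis.

```python
import numpy as np

# Minimal sketch of denoising over the range of a ReLU generator
# G: R^k -> R^n by gradient descent on f(z) = ||G(z) - y||^2 / 2.
rng = np.random.default_rng(0)
k, m, n = 4, 64, 256                        # latent, hidden, image dims

W1 = rng.normal(size=(m, k)) / np.sqrt(k)   # random expansive layers
W2 = rng.normal(size=(n, m)) / np.sqrt(m)   # (illustrative stand-in)

def G(z):
    """Two-layer ReLU generator mapping a k-dim code to an n-dim image."""
    return W2 @ np.maximum(W1 @ z, 0.0)

def grad_f(z, y):
    """(Sub)gradient of f(z) = 0.5 * ||G(z) - y||^2 with respect to z."""
    h = W1 @ z
    r = W2 @ np.maximum(h, 0.0) - y         # residual G(z) - y
    return W1.T @ ((h > 0.0) * (W2.T @ r))  # ReLU mask on hidden units

x_true = G(rng.normal(size=k))              # clean signal in range(G)
y = x_true + 0.1 * rng.normal(size=n)       # additive Gaussian noise

z = 0.1 * rng.normal(size=k)                # random initialization
loss0 = 0.5 * np.sum((G(z) - y) ** 2)
for _ in range(5000):                       # plain gradient descent
    z -= 2e-3 * grad_f(z, y)
loss1 = 0.5 * np.sum((G(z) - y) ** 2)

noise_energy = np.sum((y - x_true) ** 2)
residual_energy = np.sum((G(z) - x_true) ** 2)
print(f"loss {loss0:.3f} -> {loss1:.3f}, "
      f"noise reduction factor {residual_energy / noise_energy:.3f}")
```

With $k/n = 4/256$, the paper's $O(k/n)$ bound suggests a large reduction in noise energy when the gradient method reaches a global minimizer; whether plain descent from a random start does so depends on the loss landscape, which is why the theoretical algorithm is stated with additional safeguards.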

Funder

NSF

NSF CAREER

Publisher

Oxford University Press (OUP)

Subject

Applied Mathematics, Computational Theory and Mathematics, Numerical Analysis, Statistics and Probability, Analysis


Cited by 4 articles.

1. Compressive phase retrieval: optimal sample complexity with deep generative priors. Communications on Pure and Applied Mathematics, 2023-09-11.

2. Discovering Structure From Corruption for Unsupervised Image Reconstruction. IEEE Transactions on Computational Imaging, 2023.

3. Sparsity-Free Compressed Sensing With Applications to Generative Priors. IEEE Journal on Selected Areas in Information Theory, 2022-09.

4. Theoretical Perspectives on Deep Learning Methods in Inverse Problems. IEEE Journal on Selected Areas in Information Theory, 2022-09.

