Authors:
Max Daniels, Cédric Gerbelot, Florent Krzakala, Lenka Zdeborová
Abstract
Signal recovery under generative neural network priors has emerged as a promising direction in statistical inference and computational imaging. Theoretical analysis of reconstruction algorithms under generative priors is, however, challenging. For generative priors with fully connected layers and Gaussian i.i.d. weights, such an analysis was achieved by the multi-layer approximate message passing (ML-AMP) algorithm via a rigorous state evolution. However, practical generative priors are typically convolutional, allowing for computational benefits and inductive biases, and so the Gaussian i.i.d. weight assumption is very limiting. In this paper, we overcome this limitation and establish the state evolution of ML-AMP for random convolutional layers. We prove in particular that random convolutional layers belong to the same universality class as Gaussian matrices. Our proof technique is of independent interest as it establishes a mapping between convolutional matrices and spatially coupled sensing matrices used in coding theory.
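The universality claim can be illustrated with a minimal numerical sketch. The code below builds the dense matrix form of a circular 1-D convolution whose filter has i.i.d. Gaussian entries, a toy stand-in for a random convolutional layer. The function name `random_conv_matrix` and the parameters `n` (signal length) and `k` (filter length) are illustrative choices, not the paper's construction; the check only verifies that, with filter variance 1/k, the structured matrix matches the second-moment normalization E[WᵀW] ≈ I of a Gaussian i.i.d. matrix of the same size.

```python
import numpy as np

def random_conv_matrix(n, k, rng):
    """Dense matrix of a circular 1-D convolution with an i.i.d.
    Gaussian filter of length k and variance 1/k per entry.
    Rows are shifted copies of the same random filter, so the
    matrix is highly structured (circulant), not i.i.d."""
    w = rng.normal(0.0, 1.0 / np.sqrt(k), size=k)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(k):
            W[i, (i + j) % n] = w[j]
    return W

rng = np.random.default_rng(0)
n, k = 512, 256
W = random_conv_matrix(n, k, rng)

# Each column contains every filter coefficient exactly once, so the
# diagonal of W^T W equals ||w||^2, which concentrates around 1 —
# the same normalization as a Gaussian i.i.d. matrix with variance 1/n.
print(np.mean(np.diag(W.T @ W)))  # close to 1
```

Matching second moments is of course only a sanity check on the normalization; the universality result in the paper concerns the full asymptotic behavior of AMP-style iterations, established through the mapping to spatially coupled sensing matrices.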
Subject
Statistics, Probability and Uncertainty; Statistics and Probability; Statistical and Nonlinear Physics
References: 36 articles.