Affiliation:
1. Department of Mathematics and Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, USA
2. Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, California, USA
3. Helm.ai, Menlo Park, California, USA
Abstract
Advances in compressive sensing (CS) have provided reconstruction algorithms for sparse signals from linear measurements with optimal sample complexity, but natural extensions of this methodology to nonlinear inverse problems have been met with potentially fundamental sample complexity bottlenecks. In particular, tractable algorithms for compressive phase retrieval with sparsity priors have not been able to achieve optimal sample complexity. This has created an open problem in compressive phase retrieval: under generic, phaseless linear measurements, are there tractable reconstruction algorithms that succeed with optimal sample complexity? Meanwhile, progress in machine learning has led to the development of new data‐driven signal priors in the form of generative models, which can outperform sparsity priors with significantly fewer measurements. In this work, we resolve the open problem in compressive phase retrieval and demonstrate that generative priors can lead to a fundamental advance by permitting optimal sample complexity with a tractable algorithm. We additionally provide empirical results showing that exploiting generative priors in phase retrieval can significantly outperform sparsity priors. These results provide support for generative priors as a new paradigm for signal recovery in a variety of contexts, both empirically and theoretically. The strengths of this paradigm are that (1) generative priors can represent some classes of natural signals more concisely than sparsity priors, (2) generative priors allow for direct optimization over the natural signal manifold, which is intractable under sparsity priors, and (3) the resulting non‐convex optimization problems with generative priors can admit benign optimization landscapes at optimal sample complexity, perhaps surprisingly, even in cases of nonlinear measurements.
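To make the setup concrete, the following is a minimal toy sketch, not the paper's algorithm: phaseless measurements y = |A x*| of a signal x* in the range of a fixed one-layer ReLU generative network G, recovered by plain gradient descent on the amplitude least-squares loss over the latent variable. All dimensions, the network G, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy sizes: latent dim k, signal dim n, measurement count m.
k, n, m = 5, 50, 100

# Assumed generative prior: a fixed random one-layer ReLU network G(z) = relu(W z).
W = rng.standard_normal((n, k)) / np.sqrt(k)
relu = lambda t: np.maximum(t, 0.0)
G = lambda z: relu(W @ z)

# Ground-truth signal in the range of G, and generic phaseless measurements y = |A x*|.
z_star = rng.standard_normal(k)
x_star = G(z_star)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = np.abs(A @ x_star)

def loss_and_grad(z):
    """Amplitude least-squares loss || |A G(z)| - y ||^2 and a (sub)gradient in z."""
    pre = W @ z                      # pre-activations of G
    x = relu(pre)
    r = A @ x
    res = np.abs(r) - y              # residual of the phaseless measurements
    # Chain rule: d|r|/dr = sign(r); d relu(pre)/d pre = 1[pre > 0].
    g_x = A.T @ (res * np.sign(r))
    g_z = W.T @ (g_x * (pre > 0))
    return np.sum(res**2), g_z

# Direct optimization over the latent space, from a random initialization.
z = rng.standard_normal(k)
f0, _ = loss_and_grad(z)
for _ in range(3000):
    f, g = loss_and_grad(z)
    z -= 0.01 * g

x_hat = G(z)
rel_err = np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star)
print(f"loss: {f0:.3f} -> {f:.3f}, relative error: {rel_err:.3f}")
```

This sketch only illustrates optimizing directly over the signal manifold parameterized by G; the paper's guarantees concern when such non-convex landscapes are benign at optimal sample complexity, which a single toy run does not establish.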
Funder
National Science Foundation
Subject
Applied Mathematics, General Mathematics