
Lower Bounds for Compressed Sensing with Generative Models

Abstract

The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with~\cite{BJPD17} has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the~\cite{BJPD17} upper bound for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires roughly $\Omega(k \log L)$ linear measurements for sparse recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \mathbb{R}^{2k} \to \mathbb{R}^n$ with $O(1)$ layers and $O(kn)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors.
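To make the setting concrete: the upper-bound approach of~\cite{BJPD17} recovers $x = G(z)$ by minimizing $\|AG(z) - y\|^2$ over the latent code $z$. Below is a minimal numpy sketch of that recovery loop, assuming a toy one-layer ReLU generator with fixed Gaussian weights; the dimensions, step size, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

k, n, m = 5, 100, 40  # latent dim k, signal dim n, number of measurements m

# Toy generator G(z) = ReLU(W z): a one-layer ReLU network with fixed
# Gaussian weights. G is L-Lipschitz with L at most the spectral norm of W.
W = rng.normal(size=(n, k))

def G(z):
    return np.maximum(W @ z, 0.0)

# Gaussian measurement matrix and noiseless measurements y = A G(z*).
A = rng.normal(size=(m, n)) / np.sqrt(m)
z_star = rng.normal(size=k)
y = A @ G(z_star)

def loss_and_grad(z):
    """f(z) = ||A G(z) - y||^2 and its gradient (ReLU subgradient at 0 taken as 0)."""
    u = W @ z
    r = A @ np.maximum(u, 0.0) - y
    g = 2.0 * (W.T @ ((u > 0).astype(float) * (A.T @ r)))
    return float(r @ r), g

# Gradient descent in the latent space, in the spirit of [BJPD17].
z = rng.normal(size=k)
step = 5e-4  # small step chosen conservatively for this toy problem
history = [loss_and_grad(z)[0]]
for _ in range(1000):
    _, g = loss_and_grad(z)
    z = z - step * g
    history.append(loss_and_grad(z)[0])

print(f"initial loss {history[0]:.3f}, final loss {history[-1]:.3f}")
```

Note that this objective is nonconvex in $z$, so plain gradient descent is a heuristic here; \cite{BJPD17} analyze recovery guarantees under additional assumptions on $A$ and $G$.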
