
Robust One-Bit Recovery via ReLU Generative Networks: Near-Optimal Statistical Rate and Global Landscape Analysis

Abstract

We study the robust one-bit compressed sensing problem, whose goal is to design an algorithm that faithfully recovers any sparse target vector $\theta_0 \in \mathbb{R}^d$ \textit{uniformly} via $m$ quantized noisy measurements. Specifically, we consider a new framework for this problem in which the sparsity is implicitly enforced by mapping a low-dimensional representation $x_0 \in \mathbb{R}^k$ through a known $n$-layer ReLU generative network $G:\mathbb{R}^k \rightarrow \mathbb{R}^d$ such that $\theta_0 = G(x_0)$. Such a framework imposes low-dimensional priors on $\theta_0$ without a known sparsity basis. We propose to recover the target $G(x_0)$ by solving an unconstrained empirical risk minimization (ERM) problem. Under a weak \textit{sub-exponential measurement assumption}, we establish a joint statistical and computational analysis. In particular, we prove that the ERM estimator in this new framework achieves a statistical rate of $m = \widetilde{\mathcal{O}}(kn \log d / \varepsilon^2)$, recovering any $G(x_0)$ uniformly up to an error $\varepsilon$. When the network is shallow (i.e., $n$ is small), we show that this rate matches the information-theoretic lower bound up to logarithmic factors of $\varepsilon^{-1}$. From the lens of computation, we prove that under proper conditions on the network weights, our proposed empirical risk, despite non-convexity, has no stationary point outside of small neighborhoods around the true representation $x_0$ and its negative multiple; furthermore, under further assumptions on the network weights, we show that the global minimizer of the empirical risk stays within the neighborhood around $x_0$ rather than its negative multiple.
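To make the setup concrete, the following is a minimal NumPy sketch of the measurement model $y_i = \mathrm{sign}(\langle a_i, G(x_0)\rangle + \text{noise})$ together with one common unconstrained surrogate loss used in one-bit recovery. The network architecture, dimensions, Gaussian measurement ensemble, and the exact form of the loss are illustrative assumptions, not the paper's definitive objective.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def G(x, weights):
    """n-layer ReLU generative network G: R^k -> R^d (bias-free, for illustration)."""
    h = x
    for W in weights:
        h = relu(W @ h)
    return h

# --- toy dimensions (hypothetical values, chosen only for this sketch) ---
k, d, m = 5, 50, 2000
rng = np.random.default_rng(0)
weights = [rng.normal(size=(20, k)) / np.sqrt(k),
           rng.normal(size=(d, 20)) / np.sqrt(20)]  # a 2-layer G

x0 = rng.normal(size=k)
theta0 = G(x0, weights)            # target vector theta_0 = G(x_0)

A = rng.normal(size=(m, d))        # measurement vectors a_i (Gaussian here for simplicity)
noise = 0.1 * rng.normal(size=m)   # pre-quantization noise
y = np.sign(A @ theta0 + noise)    # one-bit (quantized) measurements

def empirical_risk(x):
    """One common unconstrained surrogate for one-bit recovery:
    L(x) = ||G(x)||^2 / 2 - (1/m) * sum_i y_i * <a_i, G(x)>.
    (Illustrative; the paper's exact ERM objective may differ.)"""
    g = G(x, weights)
    return 0.5 * np.dot(g, g) - np.mean(y * (A @ g))

# The risk should be lower near the true representation than at a random point.
print(empirical_risk(x0), empirical_risk(rng.normal(size=k)))
```

This surrogate is unconstrained in $x$, matching the abstract's ERM formulation; the landscape results quoted above concern the stationary points of such a non-convex objective over the latent space $\mathbb{R}^k$.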
