Beyond Independent Measurements: General Compressed Sensing with GNN Application

30 October 2021
Alireza Naderi
Y. Plan
arXiv:2111.00327
Abstract

We consider the problem of recovering a structured signal $\mathbf{x} \in \mathbb{R}^{n}$ from noisy linear observations $\mathbf{y} = \mathbf{M}\mathbf{x} + \mathbf{w}$. The measurement matrix is modeled as $\mathbf{M} = \mathbf{B}\mathbf{A}$, where $\mathbf{B} \in \mathbb{R}^{l \times m}$ is arbitrary and $\mathbf{A} \in \mathbb{R}^{m \times n}$ has independent sub-gaussian rows. By varying $\mathbf{B}$ and the sub-gaussian distribution of $\mathbf{A}$, this gives a family of measurement matrices which may have heavy tails, dependent rows and columns, and singular values with a large dynamic range. When the structure is given as a possibly non-convex cone $T \subset \mathbb{R}^{n}$, an approximate empirical risk minimizer is proven to be a robust estimator if the effective number of measurements is sufficient, even in the presence of a model mismatch. In classical compressed sensing with independent (sub-)gaussian measurements, one asks how many measurements are needed to recover $\mathbf{x}$. In our setting, however, the effective number of measurements depends on the properties of $\mathbf{B}$. We show that the effective rank of $\mathbf{B}$ may be used as a surrogate for the number of measurements, and if this exceeds the squared Gaussian mean width of $(T - T) \cap \mathbb{S}^{n-1}$, then accurate recovery is guaranteed. Furthermore, we examine the special case of generative priors in detail, that is, when $\mathbf{x}$ lies close to $T = \mathrm{ran}(G)$ and $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$ is a Generative Neural Network (GNN) with ReLU activation functions. Our work relies on a recent result in random matrix theory by Jeong, Li, Plan, and Yilmaz (arXiv:2001.10631).
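As a rough illustration (not code from the paper), the following Python sketch instantiates the measurement model $\mathbf{M} = \mathbf{B}\mathbf{A}$ with a Gaussian $\mathbf{A}$ and an ill-conditioned $\mathbf{B}$, and computes the stable rank $\|\mathbf{B}\|_F^2 / \|\mathbf{B}\|_{\mathrm{op}}^2$, one standard notion of effective rank. All dimensions, the spectrum of $\mathbf{B}$, and the choice of a sparsity cone for $T$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
l, m, n, s = 200, 300, 500, 10   # rows of B, rows of A, ambient dim, sparsity

# A has independent sub-gaussian rows; standard Gaussian rows are one example.
A = rng.standard_normal((m, n))

# B is arbitrary; build one whose singular values span a large
# dynamic range, as the abstract allows.
U, _ = np.linalg.qr(rng.standard_normal((l, l)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
singular_values = np.logspace(0, -3, l)          # decaying spectrum
B = U @ np.diag(singular_values) @ V[:l, :]

M = B @ A                                        # measurement matrix M = BA

# Stable rank ||B||_F^2 / ||B||_op^2, a standard proxy for effective rank.
stable_rank = np.linalg.norm(B, "fro") ** 2 / np.linalg.norm(B, 2) ** 2
print(f"stable rank of B: {stable_rank:.1f}  (ambient rank l = {l})")

# Noisy observations y = Mx + w of an s-sparse signal
# (the set of s-sparse vectors is one example of a non-convex cone T).
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = M @ x + 0.01 * rng.standard_normal(l)
```

With this decaying spectrum the stable rank comes out far below $l$, which matches the abstract's point: the number of rows of $\mathbf{B}$ overstates the information content of the measurements, and it is the effective rank that must exceed the squared Gaussian mean width of $(T - T) \cap \mathbb{S}^{n-1}$ for accurate recovery.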
