Sparse Signal Processing with Linear and Non-Linear Observations: A Unified Shannon Theoretic Approach

Information Theory Workshop (ITW), 2013
Abstract

In this work we derive fundamental limits for many linear and non-linear sparse signal processing models, including linear and non-linear sparse regression, group testing, multivariate regression, and problems with missing features. In general, sparse signal processing problems can be characterized in terms of the following Markovian property. We are given a set of $N$ variables $X_1, X_2, \ldots, X_N$, and there is an unknown subset of variables $S \subset \{1,2,\ldots,N\}$ that are \emph{relevant} for predicting outcomes/outputs $Y$. More specifically, when $Y$ is conditioned on $\{X_n\}_{n \in S}$, it is conditionally independent of the other variables, $\{X_n\}_{n \notin S}$. Our goal is to identify the set $S$ from samples of the variables $X$ and the associated outcomes $Y$. We characterize this problem as a version of the noisy channel coding problem. Using asymptotic information-theoretic analyses, we establish mutual information formulas that provide necessary and sufficient conditions on the number of samples required to successfully recover the salient variables. These mutual information expressions unify the conditions for both linear and non-linear observations. We then compute sample complexity bounds for the aforementioned models based on these mutual information expressions, demonstrating the applicability and flexibility of our results for general sparse signal processing models.
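The Markovian observation model described in the abstract can be sketched in a few lines. This is an illustrative simulation only, with hypothetical sizes ($N$, $|S|$, sample count, noise level) not taken from the paper: outcomes $Y$ are generated as a noisy linear function of $\{X_n\}_{n \in S}$ alone, and a naive correlation-based decoder attempts to recover the unknown support $S$ from samples of $(X, Y)$.

```python
import numpy as np

# Sketch of the sparse observation model (illustrative parameters only):
# N candidate variables, an unknown salient subset S of size K, and
# outcomes Y that depend on the variables only through {X_n : n in S}.
rng = np.random.default_rng(0)

N, K, n_samples = 20, 3, 500               # hypothetical problem sizes
S = rng.choice(N, size=K, replace=False)   # unknown support to recover

X = rng.standard_normal((n_samples, N))    # samples of X_1, ..., X_N
beta = np.zeros(N)
beta[S] = rng.uniform(1.0, 2.0, size=K)    # nonzero weights only on S

# Linear observation model: Y is a noisy function of X_S alone, so
# conditioned on {X_n : n in S} it is independent of the rest of X.
Y = X @ beta + 0.1 * rng.standard_normal(n_samples)

# Naive decoder: rank variables by |correlation with Y|, keep the top K.
scores = np.abs(X.T @ Y)
S_hat = set(np.argsort(scores)[-K:])
print("true support:", sorted(S), "recovered:", sorted(S_hat))
```

With enough samples relative to the mutual information between the relevant variables and $Y$, such a decoder recovers $S$ reliably; the paper's bounds characterize exactly how many samples are necessary and sufficient for general (not just linear) observation models.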
