Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression

The linear model has long been a mainstay of statistics and signal processing. One particular challenge for inference under linear models is understanding the conditions on the dictionary under which reliable inference is possible. This challenge has attracted renewed attention in recent years because many modern inference problems deal with the "underdetermined" setting. This paper makes several contributions to this setting in the case where the observations are given by a linear combination of a small number of groups of columns of the dictionary. First, it specifies conditions on the dictionary under which most block submatrices of the dictionary are well conditioned. This result is fundamentally different from prior work because (i) it provides conditions that can be explicitly computed in polynomial time, (ii) these conditions translate into near-optimal scaling of the number of columns of the block subdictionaries as a function of the number of observations for a large class of dictionaries, and (iii) it suggests that the spectral norm of the dictionary, rather than its column/block coherences, fundamentally limits the dimensions of well-conditioned block subdictionaries. Second, the paper investigates block-sparse recovery and block-sparse regression in underdetermined settings. For both problems, it leverages the result on conditioning of block subdictionaries to establish that near-optimal block-sparse recovery and block-sparse regression are possible for a large class of dictionaries, provided the dictionary satisfies easily computable conditions and the coefficients describing the linear combination of groups of columns follow a mild statistical prior.
Third, the paper reports extensive numerical experiments that highlight the effects of different measures of the dictionary on block-sparse inference.
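
The first contribution concerns the extreme singular values of randomly chosen block subdictionaries. The abstract does not give the paper's quantitative bounds, but the phenomenon itself is easy to probe numerically. The following is a minimal illustrative sketch, not the paper's procedure: the Gaussian dictionary, the dimensions (n, p, m), and the number of Monte Carlo trials are all assumptions chosen for the example; it estimates the extreme singular values over random choices of k blocks and reports the spectral norm of the full dictionary, the quantity the abstract identifies as the fundamental limiting measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an n x p Gaussian dictionary partitioned into
# p // m blocks of m consecutive columns (dimensions are assumptions).
n, p, m = 64, 256, 4
Phi = rng.standard_normal((n, p)) / np.sqrt(n)  # columns have unit norm in expectation
blocks = [np.arange(b * m, (b + 1) * m) for b in range(p // m)]

def block_subdict_extremes(Phi, blocks, k, trials=200, rng=rng):
    """Monte Carlo estimate of the extreme singular values of block
    subdictionaries formed from k blocks chosen uniformly at random."""
    lo, hi = np.inf, -np.inf
    for _ in range(trials):
        chosen = rng.choice(len(blocks), size=k, replace=False)
        cols = np.concatenate([blocks[b] for b in chosen])
        s = np.linalg.svd(Phi[:, cols], compute_uv=False)  # singular values, descending
        lo, hi = min(lo, s[-1]), max(hi, s[0])
    return lo, hi

smin, smax = block_subdict_extremes(Phi, blocks, k=8)
print(f"spectral norm of full dictionary: {np.linalg.norm(Phi, 2):.3f}")
print(f"extreme singular values over sampled 8-block subdictionaries: {smin:.3f} / {smax:.3f}")
```

For a well-conditioned ensemble, the reported minimum and maximum singular values stay bounded away from zero and from the spectral norm of the full dictionary, which is the qualitative behavior the first contribution characterizes.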
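The abstract does not name the recovery algorithm used for block-sparse recovery and regression. A standard convex approach to this class of problems is the group lasso (mixed l2/l1 minimization), sketched below via proximal gradient descent purely for illustration; the solver choice, dimensions, and regularization weight are assumptions, not the paper's method.

```python
import numpy as np

def block_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2: shrinks the whole block toward zero."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

def group_lasso_ista(Phi, y, blocks, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for the group-lasso objective
    0.5 * ||y - Phi x||^2 + lam * sum_b ||x_b||_2 (illustrative, not the paper's algorithm)."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        z = x - step * (Phi.T @ (Phi @ x - y))  # gradient step on the least-squares term
        for b in blocks:                        # blockwise shrinkage step
            x[b] = block_soft_threshold(z[b], step * lam)
    return x

# Hypothetical demo: observations from a signal supported on 3 random blocks.
rng = np.random.default_rng(1)
n, p, m = 64, 256, 4
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
blocks = [np.arange(b * m, (b + 1) * m) for b in range(p // m)]
x0 = np.zeros(p)
for b in rng.choice(len(blocks), size=3, replace=False):
    x0[blocks[b]] = rng.standard_normal(m)
x_hat = group_lasso_ista(Phi, Phi @ x0, blocks, lam=0.01)
print(f"relative recovery error: {np.linalg.norm(x_hat - x0) / np.linalg.norm(x0):.3e}")
```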