Structured Sparse Recovery via Convex Optimization

Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Existing work addresses this problem under the assumption that the number of atoms in each block equals the dimension of the subspace associated with that block. However, this assumption is violated in many applications in signal processing, image processing, and computer vision, such as face recognition. In this paper, we consider the block-sparse reconstruction problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two classes of non-convex programs, which aim to minimize the number of nonzero coefficient blocks and the number of nonzero reconstructed vectors from the blocks, respectively. Since both classes of problems are NP-hard, we propose convex relaxations and derive conditions under which each class of convex programs is equivalent to the original non-convex formulation. Our conditions depend on the notions of mutual and cumulative \emph{subspace} coherence of a dictionary, which are natural generalizations of the existing notions of mutual and cumulative coherence. We evaluate the performance of the proposed convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem can improve the state-of-the-art face recognition results by 10% with only 25% of the training data.
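To make the setup concrete, the following is a minimal sketch of a convex relaxation of this kind: a group-lasso-style program that penalizes the sum of the Euclidean norms of the coefficient blocks, solved with proximal gradient descent. This is an illustrative stand-in, not the paper's exact formulation; the function name `block_sparse_recover`, the regularization weight `lam`, and the solver choice are assumptions for the sketch.

```python
import numpy as np

def block_sparse_recover(D, y, blocks, lam=0.1, n_iter=1000):
    """Sketch of a convex relaxation for block-sparse recovery:
        minimize  0.5 * ||y - D c||_2^2 + lam * sum_b ||c_b||_2
    solved with proximal gradient descent (ISTA). `blocks` is a list of
    index arrays, one per dictionary block; the penalty encourages whole
    blocks of coefficients to be zero."""
    c = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = c - step * (D.T @ (D @ c - y))   # gradient step on the data term
        # Block soft-thresholding: proximal operator of the group penalty.
        for b in blocks:
            nb = np.linalg.norm(z[b])
            z[b] = 0.0 if nb <= step * lam else (1.0 - step * lam / nb) * z[b]
        c = z
    return c

# Illustrative usage: a dictionary of 3 blocks of 10 atoms in R^20,
# with a signal generated from block 0 only.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
blocks = [np.arange(10 * i, 10 * (i + 1)) for i in range(3)]
c_true = np.zeros(30)
c_true[blocks[0]] = rng.standard_normal(10)
y = D @ c_true
c_hat = block_sparse_recover(D, y, blocks, lam=0.05, n_iter=2000)
block_energies = [np.linalg.norm(c_hat[b]) for b in blocks]
```

With this construction, the recovered coefficients concentrate on the block that generated the signal, while the penalty drives the other blocks toward zero.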