Structured Sparse Recovery via Convex Optimization

Recently, there has been increasing interest in recovering sparse representations of signals from a union of subspaces. We consider dictionaries that consist of multiple blocks, where the atoms in each block are drawn from a linear subspace. Given a signal that lives in the direct sum of a few subspaces, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks of the dictionary. Unlike existing results, we do not restrict the number of atoms in each block of the dictionary to be equal to the dimension of the corresponding subspace. Instead, motivated by signal/image processing and computer vision problems such as face recognition and motion segmentation, we allow an arbitrary number of atoms in each block, which can far exceed the dimension of the underlying subspace. To find a block-sparse representation of a signal, we consider two classes of non-convex programs, which are based on minimizing a mixed ℓ2/ℓ0 quasi-norm, and consider their convex relaxations. The first class of optimization programs directly penalizes the norms of the coefficient blocks, while the second class penalizes the norms of the vectors reconstructed from the blocks of the dictionary. For each class of convex programs, we provide conditions, based on the introduced notions of mutual and cumulative subspace coherence of a given dictionary, under which the convex program is equivalent to the original non-convex formulation. We evaluate the performance of the two families of convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem and using the appropriate class of convex programs can improve state-of-the-art face recognition results by 10% with only 25% of the training data.
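To make the two classes of convex relaxations concrete, the following Python sketch solves both on a small synthetic dictionary using the CVXPY library. This is an illustrative assumption, not the authors' code: all variable names, block sizes, and the random test data are hypothetical. The first program minimizes the sum of ℓ2 norms of the coefficient blocks; the second minimizes the sum of ℓ2 norms of the reconstructed vectors B_i c_i; both enforce exact reconstruction y = B c.

# Minimal sketch (assumed setup, not from the paper) of the two convex
# block-sparse recovery programs described in the abstract, via CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Dictionary with blocks of unequal sizes: the number of atoms in a block
# may exceed the dimension of the underlying subspace.
block_sizes = [4, 6, 3, 5]
offsets = np.cumsum([0] + block_sizes)
n_atoms = int(offsets[-1])
D = 8                                      # ambient signal dimension
B = rng.standard_normal((D, n_atoms))

# Synthesize a signal from blocks 0 and 2 only (block-sparse ground truth).
c_true = np.zeros(n_atoms)
c_true[offsets[0]:offsets[1]] = rng.standard_normal(block_sizes[0])
c_true[offsets[2]:offsets[3]] = rng.standard_normal(block_sizes[2])
y = B @ c_true

n_blocks = len(block_sizes)

def block(c, i):
    # Coefficient block i of the stacked coefficient vector c.
    return c[offsets[i]:offsets[i + 1]]

# Class 1: directly penalize the l2 norms of the coefficient blocks.
c1 = cp.Variable(n_atoms)
obj1 = sum(cp.norm(block(c1, i), 2) for i in range(n_blocks))
cp.Problem(cp.Minimize(obj1), [B @ c1 == y]).solve()

# Class 2: penalize the l2 norms of the reconstructed vectors B_i c_i.
c2 = cp.Variable(n_atoms)
obj2 = sum(cp.norm(B[:, offsets[i]:offsets[i + 1]] @ block(c2, i), 2)
           for i in range(n_blocks))
cp.Problem(cp.Minimize(obj2), [B @ c2 == y]).solve()

for name, c in [("coefficient-norm program", c1),
                ("reconstruction-norm program", c2)]:
    norms = [np.linalg.norm(c.value[offsets[i]:offsets[i + 1]])
             for i in range(n_blocks)]
    print(name, "block norms:", np.round(norms, 3))

In both programs, blocks whose recovered coefficient norm is numerically nonzero indicate the active subspaces; under the coherence conditions discussed in the paper, these coincide with the blocks used to synthesize the signal.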