
The Sample Complexity of Subspace Learning with Partial Information

Journal of Machine Learning Research (JMLR), 2014
Abstract

The goal of subspace learning is to find a $k$-dimensional subspace of $\mathbb{R}^d$ such that the expected squared distance between instance vectors and the subspace is as small as possible. In this paper we study the sample complexity of subspace learning in a *partial information* setting, in which the learner can only observe $r \le d$ attributes from each instance vector. We derive upper and lower bounds on the sample complexity in different scenarios. In particular, our upper bounds involve a generalization of vector sampling techniques, which are often used in bandit problems, to matrices.
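
To make the setting concrete, below is a minimal sketch (not the paper's algorithm; all function names and the toy data are illustrative) of one natural approach under these assumptions: each sample reveals $r$ uniformly random attributes, the entries of the rank-one matrix $x x^\top$ are reweighted by their inverse inclusion probabilities to obtain an unbiased estimate, and the learned subspace is spanned by the top-$k$ eigenvectors of the averaged estimate.

```python
import numpy as np

def partial_observation(x, r, rng):
    """Reveal r attributes of x, chosen uniformly at random without replacement."""
    idx = rng.choice(x.shape[0], size=r, replace=False)
    return idx, x[idx]

def unbiased_outer_product(idx, values, d, r):
    """Unbiased estimate of x x^T from r observed attributes.

    A diagonal entry is observed with probability r/d, an off-diagonal pair
    with probability r(r-1)/(d(d-1)); each observed term is divided by its
    inclusion probability (inverse-propensity weighting).
    """
    M = np.zeros((d, d))
    p_diag = r / d
    p_off = r * (r - 1) / (d * (d - 1))
    for a, i in enumerate(idx):
        for b, j in enumerate(idx):
            M[i, j] = values[a] * values[b] / (p_diag if i == j else p_off)
    return M

def subspace_from_estimates(estimates, k):
    """Top-k eigenvectors of the averaged covariance estimate span the learned subspace."""
    C = np.mean(estimates, axis=0)
    C = (C + C.T) / 2  # symmetrize for numerical stability
    _, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    return eigvecs[:, -k:]

# Toy usage: data lying near a k-dimensional subspace, observed r attributes at a time.
rng = np.random.default_rng(0)
d, k, r, m = 20, 3, 5, 5000
U_true = np.linalg.qr(rng.standard_normal((d, k)))[0]
X = rng.standard_normal((m, k)) @ U_true.T + 0.1 * rng.standard_normal((m, d))
estimates = []
for x in X:
    idx, vals = partial_observation(x, r, rng)
    estimates.append(unbiased_outer_product(idx, vals, d, r))
W = subspace_from_estimates(estimates, k)
residual = X - X @ W @ W.T
print("avg squared distance to learned subspace:", np.mean(np.sum(residual**2, axis=1)))
```

The sketch only illustrates the partial-information constraint and the least-squares objective; the paper's contribution is the analysis of how many such partially observed samples are needed, not this particular estimator.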
