Learning a Latent Simplex in Input-Sparsity Time

We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^d$, given access to $A \in \mathbb{R}^{d \times n}$, which can be viewed as a data matrix with $n$ points that are obtained by randomly perturbing latent points in the simplex $K$ (potentially beyond $K$). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models, can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA, 2020) give an algorithm for learning such a latent simplex in time roughly $O(k \cdot \mathrm{nnz}(A))$, where $\mathrm{nnz}(A)$ is the number of non-zeros in $A$. We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$, which holds in many of these applications. Further, we show this assumption is necessary, as otherwise an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low-rank approximation. At a high level, Bhattacharyya and Kannan provide an adaptive algorithm that makes $k$ matrix-vector product queries to $A$, and each query is a function of all queries preceding it. Since each matrix-vector product requires $\mathrm{nnz}(A)$ time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to $A$ in input-sparsity time and show that the column space thus obtained has small $\sin\Theta$ (angular) distance to the right top-$k$ singular space of $A$. Our algorithm then selects $k$ points in the low-rank subspace with the largest inner product with $k$ carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the $k \cdot \mathrm{nnz}(A)$ running time.
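To make the two-step recipe above concrete, here is a minimal Python/SciPy sketch: a sketched low-rank approximation computed in roughly $\mathrm{nnz}(A)$ time, followed by selecting points whose projections have the largest inner products with random directions. This is only an illustration under simplifying assumptions, not the paper's algorithm: the sketch size, the choice of random directions, and the handling of duplicate selections are all simplified, and the function names (`countsketch`, `approx_top_k_subspace`, `select_vertices`) are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def countsketch(n, s, rng):
    """Sparse sketching matrix S (n x s): one random +/-1 entry per row,
    so A @ S touches each nonzero of A once (roughly nnz(A) work)."""
    rows = np.arange(n)
    cols = rng.integers(0, s, size=n)
    vals = rng.choice([-1.0, 1.0], size=n)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, s))

def approx_top_k_subspace(A, k, sketch_size=None, seed=0):
    """Orthonormal basis whose span approximates the top-k column space of A.
    For provable guarantees the sketch size should be poly(k); a small
    multiple of k is used here purely for illustration."""
    rng = np.random.default_rng(seed)
    d, n = A.shape
    s = sketch_size or 4 * k + 10
    S = countsketch(n, s, rng)
    Y = (A @ S).toarray()          # d x s, computed in ~nnz(A) time
    Q, _ = np.linalg.qr(Y)         # basis for the sketched column space
    return Q

def select_vertices(A, k, seed=0):
    """Pick k candidate simplex vertices: project all points onto the
    approximate subspace, then for each of k random directions keep the
    point with the largest |inner product|. (The actual algorithm chooses
    its random vectors more carefully and avoids repeated picks.)"""
    rng = np.random.default_rng(seed)
    Q = approx_top_k_subspace(A, k, seed=seed)
    P = (A.T @ Q).T                # coordinates of the n points in the subspace
    chosen = []
    for _ in range(k):
        r = rng.standard_normal(P.shape[0])
        scores = np.abs(r @ P)     # |<r, projected point>| for every point
        chosen.append(int(np.argmax(scores)))
    return A[:, chosen]            # d x k matrix of selected data points

# Example usage on a random sparse data matrix (columns are points).
A = sp.random(1000, 5000, density=0.01, format="csr", random_state=1)
V = select_vertices(A, k=5)
print(V.shape)                     # (1000, 5)
```

The point of the sketch is that, after the one-time $\mathrm{nnz}(A)$-cost sketch, every subsequent step operates on the small projected matrix rather than on $A$ itself, which is how the $k \cdot \mathrm{nnz}(A)$ cost of issuing $k$ adaptive matrix-vector products is avoided.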