
Learning a Latent Simplex in Input-Sparsity Time

Abstract

We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^d$, given access to $A \in \mathbb{R}^{d \times n}$, which can be viewed as a data matrix with $n$ points that are obtained by randomly perturbing latent points in the simplex $K$ (potentially beyond $K$). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models, can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA, 2020) give an algorithm for learning such a latent simplex in time roughly $O(k \cdot \mathrm{nnz}(A))$, where $\mathrm{nnz}(A)$ is the number of non-zeros in $A$. We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$, which holds in many of these applications. Further, we show this assumption is necessary, as otherwise an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low rank approximation. At a high level, Bhattacharyya and Kannan provide an adaptive algorithm that makes $k$ matrix-vector product queries to $A$, where each query is a function of all queries preceding it. Since each matrix-vector product requires $\mathrm{nnz}(A)$ time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to $A$ in input-sparsity time and show that the column space thus obtained has small $\sin\Theta$ (angular) distance to the right top-$k$ singular space of $A$. Our algorithm then selects $k$ points in the low-rank subspace with the largest inner product with $k$ carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the $\Theta(k \cdot \mathrm{nnz}(A))$ running time.
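
To make the high-level recipe in the abstract concrete, the following is a minimal Python sketch of the two stages described: build an approximate top-$k$ column space of $A$ with a CountSketch-style sparse sketch (applied in roughly input-sparsity time), then select the $k$ points whose projections have the largest inner product with $k$ random directions. The sketch size, the use of plain Gaussian directions, and the dense-array assumption are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def latent_simplex_sketch(A, k, seed=0):
    """Illustrative sketch (A: dense d x n numpy array, k: number of vertices).
    Not the paper's algorithm; constants and the selection rule are assumptions."""
    rng = np.random.default_rng(seed)
    d, n = A.shape
    m = max(4 * k, 20)  # sketch size, heuristic choice

    # CountSketch the columns of A: each column is hashed to one of m buckets
    # with a random sign, so forming the sketch touches each entry of A once.
    buckets = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    AS = np.zeros((d, m))
    for j in range(n):
        AS[:, buckets[j]] += signs[j] * A[:, j]

    # Orthonormal basis for the sketched column space; keep k directions.
    Q, _ = np.linalg.qr(AS)
    Q = Q[:, :k]

    # Project all points once into the k-dimensional subspace (k x n matrix),
    # so later work never touches the full matrix again.
    P = Q.T @ A

    # For each of k random directions in the subspace, pick the data point
    # whose projection has the largest inner product with that direction.
    idx = [int(np.argmax(rng.standard_normal(k) @ P)) for _ in range(k)]
    return A[:, idx]  # estimated simplex vertices, d x k
```

Usage would be, e.g., `V = latent_simplex_sketch(A, k=5)` for a dense data matrix `A`; the point of the structure is that the sketching pass and the single projection `Q.T @ A` replace the $k$ adaptive full-matrix passes discussed above.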
