Let $V$ be any vector space of multivariate degree-$d$ homogeneous polynomials with co-dimension at most $k$, and let $S$ be the set of points where all polynomials in $V$ {\em nearly} vanish. We establish a qualitatively optimal upper bound on the size of $\epsilon$-covers for $S$, in the $\ell_2$-norm. Roughly speaking, we show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$. Our result is constructive: it yields an algorithm to compute such an $\epsilon$-cover that runs in time $\mathrm{poly}(M)$. Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables. These include density and parameter estimation for $k$-mixtures of spherical Gaussians (with known common covariance), PAC learning one-hidden-layer ReLU networks with $k$ hidden units (under the Gaussian distribution), density and parameter estimation for $k$-mixtures of linear regressions (with Gaussian covariates), and parameter estimation for $k$-mixtures of hyperplanes. Our algorithms run in time {\em quasi-polynomial} in the parameter $k$. Previous algorithms for these problems had running times exponential in $k$. At a high level, our algorithms for all these learning problems work as follows: by computing the low-degree moments of the hidden parameters, we find a vector space of polynomials that nearly vanish on the unknown parameters. Our structural result then allows us to compute a quasi-polynomial-size cover for the set of hidden parameters, which we exploit in our learning algorithms.
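To make the "nearly vanishing polynomials from low-degree moments" step concrete, here is a minimal sketch for the simplest case (degree-$1$ forms and a mixture of spherical Gaussians with identity covariance). It is an illustration under assumptions, not the paper's algorithm: the dimensions, sample size, and the use of the empirical second moment are all choices made here for the demo. Since $\mathbb{E}[xx^\top] = \sum_i w_i \mu_i \mu_i^\top + I$, subtracting the identity leaves a matrix of rank at most $k$, and its near-null space is a vector space of linear forms of co-dimension at most $k$ that nearly vanish on every component mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 10, 3, 500_000  # ambient dimension, components, samples (illustrative choices)
means = rng.normal(size=(k, n))       # hidden parameters (unknown to the learner)
weights = np.full(k, 1.0 / k)

# Draw N samples from the k-mixture of spherical Gaussians with identity covariance.
comps = rng.choice(k, size=N, p=weights)
X = means[comps] + rng.normal(size=(N, n))

# Empirical second moment minus the (known) covariance:
#   E[x x^T] - I  =  sum_i w_i mu_i mu_i^T,
# a PSD matrix of rank <= k. Its near-zero eigendirections v satisfy
# <v, mu_i> ~ 0 for all i, i.e. they are linear forms nearly vanishing
# on the set of hidden parameters.
M = X.T @ X / N - np.eye(n)
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
V = eigvecs[:, : n - k]                # basis of the near-null space

# Check: every hidden mean nearly vanishes under all forms in V.
residual = np.abs(means @ V).max()
```

With half a million samples the residual is close to zero, while a random unit direction typically has inner product of order $1$ with the means; the near-null space thus pins the hidden parameters down to an (approximate) algebraic set, which is exactly the set the structural result covers.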