Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials

We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss. Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy. Prior work had given an algorithm for this problem with complexity $(dk/\epsilon)^{h(k)}$, where the function $h(k)$ scales super-polynomially in $k$. Interestingly, the complexity of our algorithm is near-optimal within the class of Correlational Statistical Query algorithms. At a high level, our algorithm uses tensor decomposition to identify a subspace such that all the $O(k)$-order moments are small in the orthogonal directions. Its analysis makes essential use of the theory of Schur polynomials to show that the higher-moment error tensors are small given that the lower-order ones are.
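To make the moment-based subspace idea concrete, here is a minimal sketch (not the paper's algorithm) of its simplest instance: for $f(x)=\sum_i w_i\,\mathrm{ReLU}(\langle v_i,x\rangle)$ with Gaussian inputs, the order-2 Hermite moment matrix $M_2=\mathbb{E}[f(x)(xx^\top-I)]$ equals, up to a positive constant, $\sum_i w_i v_i v_i^\top$, so its top eigenvectors lie in $\mathrm{span}\{v_i\}$. The positive weights, dimensions, and sample size below are illustrative assumptions; the actual algorithm uses higher-order tensors, which handle signed weights where order-2 moments can cancel.

```python
# Minimal sketch (assumed setup, not the paper's algorithm): recover the
# hidden subspace span{v_1, ..., v_k} from the order-2 Hermite moment
# M2 = E[y (x x^T - I)], which is proportional to sum_i w_i v_i v_i^T.
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 20, 3, 200_000                        # illustrative sizes
V = rng.standard_normal((k, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit hidden directions v_i
w = rng.uniform(0.5, 1.5, size=k)               # positive weights (assumption:
                                                # avoids order-2 cancellations)

X = rng.standard_normal((n, d))                 # x ~ N(0, I_d)
y = np.maximum(X @ V.T, 0.0) @ w                # y = sum_i w_i ReLU(<v_i, x>)

# Empirical order-2 Hermite moment: M2 ~ E[y (x x^T - I)].
M2 = (X.T * y) @ X / n - y.mean() * np.eye(d)

# Top-k eigenspace of M2 approximates span{v_1, ..., v_k}.
eigvals, eigvecs = np.linalg.eigh(M2)
U = eigvecs[:, np.argsort(-np.abs(eigvals))[:k]]  # d x k orthonormal basis

# Sanity check: each v_i should lie (nearly) inside the recovered subspace,
# i.e., its component orthogonal to U should be small.
residuals = np.linalg.norm(V - (V @ U) @ U.T, axis=1)
print("residual norms of v_i outside recovered subspace:", residuals)
```

With signed weights the order-2 matrix can vanish even when the network is nontrivial, which is why the algorithm in the paper works with moment tensors up to order $O(k)$ and argues, via Schur polynomials, that controlling the lower-order error tensors controls the higher-order ones.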