On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition

We establish connections between the problem of learning a two-layers neural network with good generalization error and tensor decomposition. We consider a model with input $x \in \mathbb{R}^d$, $k$ hidden units with weights $w_1, \dots, w_k \in \mathbb{R}^d$ and output $y \in \mathbb{R}$, i.e., $y = \sum_{i=1}^{k} \sigma(\langle w_i, x\rangle)$, where $\langle \cdot, \cdot \rangle$ denotes the scalar product and $\sigma$ the activation function. First, we show that, if we cannot learn the weights accurately, then the neural network does not generalize well. More specifically, the generalization error is close to that of a trivial predictor with access only to the norm of the input. We prove this result in a model with separated isotropic weights and in a model with random weights. In both settings, we assume that the input distribution is Gaussian, which is common in the theoretical literature. Then, we show that the problem of learning the weights is at least as hard as the problem of tensor decomposition. We prove this result for any input distribution, and we assume that the activation function is a polynomial whose degree is related to the order of the tensor to be decomposed. Hence, we obtain that learning a two-layers neural network that generalizes well is at least as hard as tensor decomposition. It has been observed that neural network models with more parameters than training samples often generalize well, even if the problem is highly underdetermined. This means that the learning algorithm does not estimate the weights accurately and yet is able to yield a good generalization error. This paper shows that such a phenomenon cannot occur with a two-layers neural network when the input distribution is Gaussian. We also provide numerical evidence supporting our theoretical findings.
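As a concrete illustration of the model and its link to tensor decomposition, the following minimal Python sketch (not the paper's code) simulates $y = \sum_{i=1}^{k} \sigma(\langle w_i, x\rangle)$ with Gaussian input and a cubic activation, and forms a third-order moment tensor whose rank-one components are the weights $w_i$. The dimensions, the cubic activation, and the unit-norm weights are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: two-layers network y = sum_i sigma(<w_i, x>) with Gaussian input,
# and the third-order moment tensor that encodes the weights as rank-one terms.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 4, 100_000               # input dim, hidden units, samples (illustrative)

W = rng.standard_normal((k, d))        # weights w_1, ..., w_k as rows
W /= np.linalg.norm(W, axis=1, keepdims=True)   # assume unit-norm weights

sigma = lambda t: t ** 3               # a degree-3 polynomial activation (illustrative)

X = rng.standard_normal((n, d))        # x ~ N(0, I_d)
Y = sigma(X @ W.T).sum(axis=1)         # y = sum_i sigma(<w_i, x>)

# Empirical version of E[y * He_3(x)], with He_3 the third Hermite tensor.
# For unit-norm w_i and sigma(t) = t^3, Stein's identity gives
# E[y * He_3(x)] = 6 * sum_i w_i (x) w_i (x) w_i, so decomposing this
# tensor recovers the hidden-unit weights.
I = np.eye(d)
M3 = np.einsum('n,ni,nj,nl->ijl', Y, X, X, X, optimize=True) / n
M1 = (Y[:, None] * X).mean(axis=0)
T_hat = M3 - (np.einsum('i,jl->ijl', M1, I)
              + np.einsum('j,il->ijl', M1, I)
              + np.einsum('l,ij->ijl', M1, I))
T_true = 6 * np.einsum('ki,kj,kl->ijl', W, W, W)

cos = np.sum(T_hat * T_true) / (np.linalg.norm(T_hat) * np.linalg.norm(T_true))
print(f"alignment between empirical and ideal tensor: {cos:.3f}")
```

With enough samples the printed alignment approaches 1, reflecting that, for a polynomial activation, the weight-learning problem is tied to decomposing a tensor of matching order.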