On the Hardness of Learning One Hidden Layer Neural Networks

Abstract
In this work, we consider the problem of learning one hidden layer ReLU neural networks with inputs from $\mathbb{R}^d$. We show that this learning problem is hard under standard cryptographic assumptions even when: (1) the size of the neural network is polynomial in $d$, (2) its input distribution is a standard Gaussian, and (3) the noise is Gaussian and polynomially small in $d$. Our hardness result is based on the hardness of the Continuous Learning with Errors (CLWE) problem, and, in particular, on the widely believed worst-case hardness of approximately solving the shortest vector problem up to a multiplicative polynomial factor.
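For concreteness, the learning setup described above can be written as follows; this is a sketch in notation not fixed by the abstract (the width $k$, weights $a_i, w_i$, biases $b_i$, and noise level $\sigma$ are illustrative symbols):

$$y \;=\; \sum_{i=1}^{k} a_i \,\mathrm{ReLU}\big(\langle w_i, x\rangle + b_i\big) \;+\; \xi, \qquad x \sim \mathcal{N}(0, I_d), \quad \xi \sim \mathcal{N}(0, \sigma^2),$$

where, under the stated conditions, the network size satisfies $k = \mathrm{poly}(d)$ and the noise level satisfies $\sigma = 1/\mathrm{poly}(d)$; the learner observes i.i.d. samples $(x, y)$ and must recover a network that fits the data.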