
On the Hardness of Learning One Hidden Layer Neural Networks

Abstract

In this work, we consider the problem of learning one hidden layer ReLU neural networks with inputs from $\mathbb{R}^d$. We show that this learning problem is hard under standard cryptographic assumptions, even when: (1) the size of the neural network is polynomial in $d$, (2) its input distribution is the standard Gaussian, and (3) the noise is Gaussian and polynomially small in $d$. Our hardness result is based on the hardness of the Continuous Learning with Errors (CLWE) problem and, in particular, on the widely believed worst-case hardness of approximately solving the shortest vector problem up to a multiplicative polynomial factor.
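
For concreteness, the following is a minimal sketch (in Python/NumPy, not taken from the paper) of the data-generating setting the abstract describes: labels produced by a planted one hidden layer ReLU network on standard Gaussian inputs, corrupted by Gaussian noise that is polynomially small in $d$. The hidden width, the weight scalings, and the exact noise exponent are illustrative assumptions, since the abstract does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64           # input dimension
k = d            # hidden width, polynomial in d (assumed: k = d)
n = 10_000       # number of samples
sigma = d**-2.0  # noise level, polynomially small in d (assumed exponent)

# Planted network: y = sum_j a_j * ReLU(<w_j, x>) + Gaussian noise
W = rng.standard_normal((k, d)) / np.sqrt(d)  # hidden-layer weights (assumed scaling)
a = rng.standard_normal(k) / np.sqrt(k)       # output-layer weights (assumed scaling)

X = rng.standard_normal((n, d))               # inputs x ~ N(0, I_d)
hidden = np.maximum(W @ X.T, 0.0)             # ReLU activations, shape (k, n)
y = a @ hidden + sigma * rng.standard_normal(n)

# A learner observes only (X, y); the paper's result says that finding a network
# with small test error from such samples is hard under cryptographic assumptions.
print(X.shape, y.shape)
```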
