Distribution-Specific Hardness of Learning Neural Networks
Although neural networks are routinely and successfully trained in practice using simple gradient-based methods, most existing theoretical results are negative, showing that learning such networks is difficult in a worst-case sense over all data distributions. In this paper, we take a more nuanced view, and consider whether specific assumptions on the "niceness" of the input distribution, or the "niceness" of the target function (e.g., in terms of smoothness, non-degeneracy, incoherence, random choice of parameters, etc.), are sufficient to guarantee learnability using gradient-based methods. We provide evidence that neither class of assumptions alone is sufficient: for any member of a class of "nice" simple target functions, there are difficult input distributions, and conversely, for any member of a general class of "nice" input distributions, there are simple target functions which are difficult to learn. Thus, to formally explain the practical success of neural network learning, it seems one would need to employ a careful combination of assumptions on both the input distribution and the target function. To prove our results, we develop some tools which may be of independent interest, such as an extension of Fourier-based techniques for proving hardness in the statistical queries framework (Blum et al., 1994) from the Boolean cube to Euclidean space.
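The abstract contains no code, but the flavor of the second hardness result (simple targets that defeat gradient methods even under a "nice" input distribution) can be illustrated empirically. The sketch below is our own illustration, not the paper's construction: it trains a one-hidden-layer ReLU network by full-batch gradient descent on targets of the form x ↦ cos(2π·freq·⟨v, x⟩) with standard Gaussian inputs. All dimensions, widths, step sizes, and frequencies are assumptions chosen for illustration; on typical runs, the low-frequency target is fit well below the trivial baseline, while the high-frequency one barely improves, consistent with the vanishing gradient signal underlying the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n = 10, 100, 2000  # illustrative sizes, not from the paper

def make_data(freq):
    # Target: a "simple" periodic function of a random one-dimensional
    # projection, under a standard Gaussian input distribution.
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    X = rng.standard_normal((n, d))
    y = np.cos(2 * np.pi * freq * (X @ v))
    return X, y

def train(X, y, steps=3000, lr=0.05):
    # One-hidden-layer ReLU network, trained with full-batch gradient
    # descent on the squared loss (manual backprop, no framework needed).
    W = rng.standard_normal((d, width)) / np.sqrt(d)
    b = np.zeros(width)
    a = rng.standard_normal(width) / np.sqrt(width)
    for _ in range(steps):
        H = np.maximum(X @ W + b, 0.0)        # hidden ReLU activations
        err = H @ a - y                       # residuals
        grad_a = H.T @ err / n
        grad_H = np.outer(err, a) * (H > 0)   # backprop through the ReLU
        W -= lr * (X.T @ grad_H / n)
        b -= lr * grad_H.mean(axis=0)
        a -= lr * grad_a
    return 0.5 * np.mean((np.maximum(X @ W + b, 0.0) @ a - y) ** 2)

for freq in (0.25, 5.0):
    X, y = make_data(freq)
    base = 0.5 * np.var(y)  # loss of the best constant predictor
    print(f"freq={freq}: final loss {train(X, y):.4f} vs trivial baseline {base:.4f}")
```

Staying near the baseline 0.5·Var(y), the loss of the best constant predictor, means gradient descent extracted essentially no information about the hidden direction v, which is the phenomenon the paper's Fourier-based statistical query arguments formalize.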