
SGD Finds then Tunes Features in Two-Layer Neural Networks with near-Optimal Sample Complexity: A Case Study in the XOR problem

Abstract

In this work, we consider the optimization process of minibatch stochastic gradient descent (SGD) on a 2-layer neural network with data separated by a quadratic ground truth function. We prove that with data drawn from the $d$-dimensional Boolean hypercube labeled by the quadratic ``XOR'' function $y = -x_i x_j$, it is possible to train to a population error $o(1)$ with $d\,\text{polylog}(d)$ samples. Our result considers simultaneously training both layers of the two-layer neural network with ReLU activations via standard minibatch SGD on the logistic loss. To our knowledge, this work is the first to give a sample complexity of $\tilde{O}(d)$ for efficiently learning the XOR function on isotropic data with a standard neural network and standard training. Our main technique is showing that the network evolves in two phases: a \textit{signal-finding} phase, where the network is small and many of the neurons evolve independently to find features, and a \textit{signal-heavy} phase, where SGD maintains and balances the features. We leverage the simultaneous training of the layers to show that it is sufficient for only a small fraction of the neurons to learn features, since those neurons will be amplified by the simultaneous growth of their second-layer weights.
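To make the setup concrete, below is a minimal sketch (not the paper's code) of the training procedure the abstract describes: minibatch SGD on the logistic loss for a two-layer ReLU network, training both layers simultaneously, on $d$-dimensional Boolean hypercube data labeled by $y = -x_i x_j$. The width, step size, batch size, and coordinate choices are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 200            # input dimension and hidden width (assumed, not from the paper)
i, j = 0, 1               # coordinates defining the quadratic "XOR" label
lr, batch, steps = 0.1, 64, 2000

W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))   # first-layer weights
a = rng.normal(scale=1.0 / np.sqrt(m), size=m)        # second-layer weights

def sample(n):
    """Draw n points uniformly from the Boolean hypercube with XOR labels."""
    x = rng.choice([-1.0, 1.0], size=(n, d))
    y = -x[:, i] * x[:, j]                             # labels in {-1, +1}
    return x, y

for _ in range(steps):
    x, y = sample(batch)
    h = x @ W.T                                        # pre-activations, shape (batch, m)
    relu = np.maximum(h, 0.0)
    f = relu @ a                                       # network output
    g = -y / (1.0 + np.exp(y * f))                     # d(logistic loss)/d(output)
    # Gradients for both layers: the layers are trained simultaneously.
    grad_a = relu.T @ g / batch
    grad_W = ((g[:, None] * a) * (h > 0)).T @ x / batch
    a -= lr * grad_a
    W -= lr * grad_W

x_test, y_test = sample(10_000)
pred = np.maximum(x_test @ W.T, 0.0) @ a
print("test 0-1 error:", np.mean(np.sign(pred) != y_test))
```

In this sketch, the second-layer weights `a` grow for neurons whose first-layer weights align with the signal coordinates, loosely mirroring the amplification effect described in the abstract; the hyperparameters are untuned and chosen only for illustration.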
