Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent

The $k$-parity problem is a classical problem in computational complexity and algorithmic theory, serving as a key benchmark for understanding computational classes. In this paper, we solve the $k$-parity problem with stochastic gradient descent (SGD) on two-layer fully-connected neural networks. We demonstrate that SGD can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube ($k \le O(\sqrt{d})$) with a sample complexity of $\tilde{O}(d^{k-1})$ using $2^{\Theta(k)}$ neurons, thus matching the established $\Omega(d^k)$ lower bounds of Statistical Query (SQ) models. Our theoretical analysis begins by constructing a good neural network capable of correctly solving the $k$-parity problem. We then demonstrate how a neural network trained with SGD can effectively approximate this good network, solving the $k$-parity problem with small statistical error. Our theoretical results are supported by empirical evidence, showcasing the efficiency and efficacy of our approach.
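
For concreteness, below is a minimal PyTorch sketch of the learning setup the abstract describes: mini-batch SGD on a two-layer fully-connected ReLU network fit to a $k$-sparse parity over the hypercube $\{-1,+1\}^d$. All hyperparameters here (width, step size, loss, number of steps) are placeholder choices for illustration and do not reproduce the paper's construction or its $2^{\Theta(k)}$-width and $\tilde{O}(d^{k-1})$-sample scalings.

```python
import torch

# Placeholder hyperparameters for illustration only; the paper's precise
# scalings (width, step size, loss choice) are not reproduced here.
d, k, n = 30, 3, 20000            # ambient dimension, parity sparsity, samples
width, batch, lr, steps = 64, 256, 0.05, 3000

# k-sparse parity data on the hypercube {-1, +1}^d:
# the label is the product of the first k coordinates.
X = torch.randint(0, 2, (n, d)).float() * 2 - 1
y = X[:, :k].prod(dim=1)

# Two-layer fully-connected ReLU network.
model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=lr)

# Mini-batch SGD on a logistic-type margin loss.
for step in range(steps):
    idx = torch.randint(0, n, (batch,))
    out = model(X[idx]).squeeze(-1)
    loss = torch.nn.functional.soft_margin_loss(out, y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Classification accuracy via the sign of the network output.
acc = (model(X).squeeze(-1).sign() == y).float().mean()
print(f"train accuracy: {acc:.3f}")
```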