Curse of Dimensionality in Neural Network Optimization
The curse of dimensionality in neural network optimization under the mean-field regime is studied. It is demonstrated that when a shallow neural network with a Lipschitz continuous activation function is trained using either the empirical or the population risk to approximate a target function that is $k$ times continuously differentiable on a $d$-dimensional domain, the population risk may not decay faster than a power of $t$ whose exponent deteriorates as the dimension $d$ grows, where $t$ is an analog of the total number of optimization iterations. This result highlights the presence of the curse of dimensionality in the amount of optimization computation required to reach a desired accuracy. Instead of analyzing the parameter evolution directly, the training dynamics are examined through the evolution of the parameter distribution under the 2-Wasserstein gradient flow. Furthermore, it is established that the curse of dimensionality persists when a locally Lipschitz continuous activation function is employed, whose Lipschitz constant on $[-x, x]$ grows at most polynomially in $x$ for any $x > 0$. In this scenario, the population risk is again shown to decay no faster than a dimension-dependent power of $t$. To the best of our knowledge, this work is the first to analyze the impact of function smoothness on the curse of dimensionality in neural network optimization theory.
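The setting described above, a shallow mean-field network whose training is viewed as a 2-Wasserstein gradient flow on the distribution of its parameters, can be illustrated numerically: plain gradient descent on a finite set of neuron "particles" is the standard particle discretization of that flow. The sketch below is only an illustration of this correspondence under assumed choices (tanh activation, a smooth toy target, squared loss, and the width, sample size, and step size shown); it is not the paper's construction and says nothing about the lower bounds themselves.

```python
import numpy as np

# Illustrative particle discretization of mean-field training (an assumption-laden
# sketch, not the paper's construction): a width-m shallow network
#   f(x) = (1/m) * sum_j a_j * tanh(w_j . x + b_j)
# trained by gradient descent on the empirical squared risk. As m grows, the
# empirical distribution of the particles (a_j, w_j, b_j) approximately follows
# the 2-Wasserstein gradient flow of the risk over parameter distributions.

rng = np.random.default_rng(0)
d, m, n = 5, 256, 1000           # input dim, width (particles), sample size (assumed)
lr, steps = 0.1, 1500            # step size and iteration count (assumed)

def target(X):
    # Smooth toy target; stands in for the k-times differentiable target function.
    return np.sin(X.sum(axis=1))

X = rng.uniform(0.0, 1.0, size=(n, d))
y = target(X)

# Neuron "particles": one (a_j, w_j, b_j) triple per hidden unit.
a = rng.normal(size=m)
W = rng.normal(size=(m, d))
b = rng.normal(size=m)

for t in range(steps):
    pre = X @ W.T + b            # (n, m) pre-activations
    act = np.tanh(pre)           # (n, m)
    pred = act @ a / m           # mean-field scaling 1/m
    resid = pred - y             # (n,)

    # Gradients of the empirical risk 0.5 * mean(resid^2) w.r.t. each particle.
    grad_a = act.T @ resid / n / m
    grad_pre = (resid[:, None] * (1.0 - act**2)) * a[None, :] / m   # (n, m)
    grad_W = grad_pre.T @ X / n
    grad_b = grad_pre.sum(axis=0) / n

    # Particle update = forward-Euler step of the Wasserstein gradient flow.
    # (Mean-field time scaling: gradients are multiplied by m so each particle's
    # velocity stays O(1) as the width grows.)
    a -= lr * m * grad_a
    W -= lr * m * grad_W
    b -= lr * m * grad_b

    if t % 300 == 0:
        print(f"step {t:5d}  empirical risk {0.5 * np.mean(resid**2):.6f}")
```

In this particle view, the flow time (roughly step size times number of iterations) plays the role of $t$ in the abstract, which is the sense in which $t$ is "an analog of the total number of optimization iterations."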
@article{na2025_2502.05360,
  title   = {Curse of Dimensionality in Neural Network Optimization},
  author  = {Sanghoon Na and Haizhao Yang},
  journal = {arXiv preprint arXiv:2502.05360},
  year    = {2025}
}