
Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime

Abstract

We study the convergence of gradient methods for the training of mean-field single-hidden-layer neural networks with square loss. Observing that this is a separable non-linear least-squares problem, linear with respect to the outer layer's weights, we consider a Variable Projection (VarPro), or two-timescale, learning algorithm that eliminates the linear variables and reduces the learning problem to the training of the feature distribution. Whereas most convergence rates for the training of neural networks rely on a neural tangent kernel analysis in which the features are fixed, we show that this strategy yields provable convergence rates for the sampling of a teacher feature distribution. Precisely, in the limit where the regularization strength vanishes, we show that the dynamics of the feature distribution correspond to a weighted ultra-fast diffusion equation. Relying on recent results on the asymptotic behavior of such PDEs, we obtain guarantees for the convergence of the trained feature distribution towards the teacher feature distribution in a teacher-student setup.
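The paper itself provides no code, but the VarPro / two-timescale idea described in the abstract can be illustrated schematically: for fixed inner-layer features, the outer weights solve a ridge-regularized linear least-squares problem in closed form, and only the features are then updated by a gradient step on the reduced objective. The sketch below is a hypothetical illustration under stated assumptions (NumPy, tanh activation, illustrative step size and regularization); it is not the authors' implementation.

# Minimal sketch (not from the paper) of one VarPro / two-timescale step for a
# two-layer network f(x) = sum_j c_j * sigma(<w_j, x>): the outer weights c are
# eliminated by a ridge-regularized least-squares solve (fast timescale), and
# only the features W are updated by gradient descent (slow timescale).
# All names, the tanh activation, and the hyperparameters are assumptions.
import numpy as np

def activation(z):
    return np.tanh(z)  # placeholder nonlinearity

def dactivation(z):
    return 1.0 - np.tanh(z) ** 2

def varpro_step(W, X, y, lam=1e-3, lr=1e-2):
    """One two-timescale update.

    W : (m, d) inner-layer features, X : (n, d) inputs, y : (n,) targets.
    """
    n = X.shape[0]
    m = W.shape[0]
    Z = X @ W.T           # pre-activations, shape (n, m)
    Phi = activation(Z)   # feature map, shape (n, m)

    # Fast timescale: outer weights solved in closed form (ridge regression).
    c = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(m), Phi.T @ y)

    # Slow timescale: gradient step on the features for the reduced objective
    # (1/2n) * ||Phi(W) c - y||^2, with c held at its optimal value.
    residual = Phi @ c - y
    grad_W = ((residual[:, None] * dactivation(Z)) * c[None, :]).T @ X / n
    return W - lr * grad_W, c

Iterating varpro_step corresponds, in this schematic picture, to training only the feature distribution; the regularization strength lam is the quantity whose vanishing limit the paper connects to a weighted ultra-fast diffusion equation.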

@article{barboni2025_2504.18208,
  title={Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime},
  author={Raphaël Barboni and Gabriel Peyré and François-Xavier Vialard},
  journal={arXiv preprint arXiv:2504.18208},
  year={2025}
}