Loss Gradient Gaussian Width based Generalization and Optimization Guarantees

Generalization and optimization guarantees on the population loss often rely on uniform convergence arguments, typically phrased in terms of the Rademacher complexity of the predictor class. The rich representational power of modern models has led to concerns about this approach. In this paper, we present generalization and optimization guarantees in terms of the complexity of the gradients, as measured by the Loss Gradient Gaussian Width (LGGW). First, we introduce generalization guarantees directly in terms of the LGGW under a flexible gradient domination condition, which includes the popular PL (Polyak-Łojasiewicz) condition as a special case. Second, we show that sample reuse in iterative gradient descent does not make the empirical gradients deviate from the population gradients as long as the LGGW is small. Third, focusing on deep networks, we bound their single-sample LGGW in terms of the Gaussian width of the featurizer, i.e., the output of the penultimate layer. To our knowledge, our generalization and optimization guarantees in terms of the LGGW are the first of their kind, and hold considerable promise toward quantitatively tight bounds for deep models.
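For reference, a minimal sketch of the standard notions the abstract invokes is given below; the paper's exact choice of gradient set and scaling may differ, so this is an illustrative formalization rather than the authors' precise definitions. The Gaussian width of a bounded set $S \subseteq \mathbb{R}^p$ is
\[
  w(S) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_p)} \Big[ \sup_{s \in S} \langle g, s \rangle \Big].
\]
The LGGW is then the Gaussian width of a set of loss gradients over the parameter domain $\Theta$, e.g., for a per-sample loss $\ell(\theta; z)$ and a fixed sample $z$,
\[
  \mathrm{LGGW}(\Theta) \;=\; w\big( \{ \nabla_\theta \ell(\theta; z) : \theta \in \Theta \} \big),
\]
and the PL condition with constant $\mu > 0$ on the population loss $L(\theta)$ with minimum value $L^*$ reads
\[
  \tfrac{1}{2} \, \| \nabla L(\theta) \|_2^2 \;\geq\; \mu \big( L(\theta) - L^* \big) \qquad \text{for all } \theta \in \Theta.
\]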
@article{banerjee2025_2406.07712,
  title   = {Loss Gradient Gaussian Width based Generalization and Optimization Guarantees},
  author  = {Arindam Banerjee and Qiaobo Li and Yingxue Zhou},
  journal = {arXiv preprint arXiv:2406.07712},
  year    = {2025}
}