
Multiclass Loss Geometry Matters for Generalization of Gradient Descent in Separable Classification

Main: 8 pages, Bibliography: 3 pages, Appendix: 16 pages
Abstract

We study the generalization performance of unregularized gradient methods for separable linear classification. While previous work deals mostly with the binary case, we focus on the multiclass setting with k classes and establish novel population risk bounds for Gradient Descent with loss functions that decay to zero. In this setting, we show risk bounds revealing that convergence rates are crucially influenced by the geometry of the loss template, as formalized by Wang and Scott (2024), rather than by the loss function itself. In particular, we establish risk upper bounds that hold for any decay rate of the loss whose template is smooth with respect to the p-norm. For exponentially decaying losses, our results indicate a contrast between the p = ∞ case, where the risk exhibits a logarithmic dependence on k, and the p = 2 case, where the risk scales linearly with k. To establish this separation formally, we also prove a lower bound in the latter scenario, demonstrating that the polynomial dependence on k is unavoidable. Central to our analysis is a novel bound on the Rademacher complexity of low-noise vector-valued linear predictors with a loss template smooth w.r.t. general p-norms.
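The sketch below is an illustrative toy example of the setting the abstract describes, not code from the paper: unregularized full-batch gradient descent on a linearly separable multiclass problem with the cross-entropy loss (whose multiclass template is smooth with respect to the ∞-norm in the sense of Wang and Scott, 2024). The synthetic data, step size, and iteration count are all assumptions chosen for illustration.

```python
# Illustrative sketch (assumptions, not the paper's experiments):
# unregularized gradient descent on separable multiclass linear classification.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 20, 5            # samples, dimension, number of classes

# Separable synthetic data: each class drawn around a well-separated mean.
means = 3.0 * rng.standard_normal((k, d))
y = rng.integers(0, k, size=n)
X = means[y] + 0.1 * rng.standard_normal((n, d))

W = np.zeros((k, d))            # linear predictor: one weight vector per class
eta = 0.5                       # step size (illustrative choice)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

for t in range(2000):
    logits = X @ W.T                       # (n, k) class scores
    P = softmax(logits)
    loss = -np.mean(np.log(P[np.arange(n), y]))
    G = P.copy()
    G[np.arange(n), y] -= 1.0              # gradient of cross-entropy w.r.t. logits
    W -= eta * (G.T @ X) / n               # full-batch gradient step

# On separable data the unregularized iterates grow in norm while the empirical
# risk decays toward zero; the paper's bounds concern the resulting population
# risk and how it scales with k depending on the loss-template geometry.
print(f"final training loss: {loss:.2e}, ||W||_F: {np.linalg.norm(W):.1f}")
```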

@article{schliserman2025_2505.22359,
  title={Multiclass Loss Geometry Matters for Generalization of Gradient Descent in Separable Classification},
  author={Matan Schliserman and Tomer Koren},
  journal={arXiv preprint arXiv:2505.22359},
  year={2025}
}