
Optimization, Generalization and Differential Privacy Bounds for Gradient Descent on Kolmogorov-Arnold Networks

Puyu Wang
Junyu Zhou
Philipp Liznerski
Marius Kloft
Main: 11 pages
4 figures
Bibliography: 4 pages
Appendix: 25 pages
Abstract

Kolmogorov--Arnold Networks (KANs) have recently emerged as a structured alternative to standard MLPs, yet a principled theory for their training dynamics, generalization, and privacy properties remains limited. In this paper, we analyze gradient descent (GD) for training two-layer KANs and derive general bounds that characterize their training dynamics, generalization, and utility under differential privacy (DP). As a concrete instantiation, we specialize our analysis to logistic loss under an NTK-separable assumption, where we show that polylogarithmic network width suffices for GD to achieve an optimization rate of order $1/T$ and a generalization rate of order $1/n$, with $T$ denoting the number of GD iterations and $n$ the sample size. In the private setting, we characterize the noise required for $(\epsilon,\delta)$-DP and obtain a utility bound of order $\sqrt{d}/(n\epsilon)$ (with $d$ the input dimension), matching the classical lower bound for general convex Lipschitz problems. Our results imply that polylogarithmic width is not only sufficient but also necessary under differential privacy, revealing a qualitative gap between the non-private regime (where sufficiency alone holds) and the private regime (where necessity also emerges). Experiments further illustrate how these theoretical insights can guide practical choices, including network width selection and early stopping.
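To make the setting concrete, the sketch below trains a toy two-layer KAN with noisy full-batch GD on the logistic loss. It is purely illustrative and not the paper's construction: the edge functions use a fixed sine basis rather than the splines typically used in KANs, the synthetic data and all constants (width, clipping norm $C$, step size, $\epsilon$, $\delta$) are arbitrary choices, and the noise scale follows the standard Gaussian-mechanism calibration $\sigma \propto C\sqrt{T\log(1/\delta)}/(n\epsilon)$ that underlies $\sqrt{d}/(n\epsilon)$-type utility bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer KAN: every edge carries a learnable univariate function,
# here parameterized over a fixed sine basis (an illustrative choice;
# the paper's spline construction and constants may differ).
n, d, m, K = 200, 5, 8, 4        # samples, input dim, width, basis size
ks = np.arange(1, K + 1)

X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * np.sin(X[:, 1]))   # synthetic labels in {-1, +1}

def basis(x):                                   # sine features of a scalar
    return np.sin(ks * x[..., None])            # shape (..., K)

W1 = rng.normal(scale=0.1, size=(m, d, K))      # first-layer edge coefficients
W2 = rng.normal(scale=0.1, size=(m, K))         # second-layer edge coefficients

def forward(X, W1, W2):
    H = np.einsum('ndk,mdk->nm', basis(X), W1)  # hidden activations
    return np.einsum('nmk,mk->n', basis(H), W2)

def per_sample_grads(X, y, W1, W2):
    B1 = basis(X)
    H = np.einsum('ndk,mdk->nm', B1, W1)
    B2 = basis(H)
    f = np.einsum('nmk,mk->n', B2, W2)
    g = -y * 0.5 * (1.0 - np.tanh(0.5 * y * f))  # d(logistic loss)/df
    gW2 = g[:, None, None] * B2                  # (n, m, K)
    dH = np.einsum('nmk,mk->nm', ks * np.cos(ks * H[..., None]), W2)
    gW1 = np.einsum('n,nm,ndk->nmdk', g, dH, B1) # (n, m, d, K)
    return gW1, gW2

# DP-GD: clip each per-sample gradient to norm C, average, add Gaussian
# noise with sigma ~ C * sqrt(T log(1/delta)) / (n * eps); constants here
# are illustrative, not the paper's.
T, eta, C, eps, delta = 50, 0.2, 1.0, 2.0, 1e-5
sigma = C * np.sqrt(2 * T * np.log(1.25 / delta)) / (n * eps)

for _ in range(T):
    gW1, gW2 = per_sample_grads(X, y, W1, W2)
    norms = np.sqrt((gW1 ** 2).sum(axis=(1, 2, 3)) + (gW2 ** 2).sum(axis=(1, 2)))
    scale = np.minimum(1.0, C / (norms + 1e-12))
    W1 -= eta * ((gW1 * scale[:, None, None, None]).mean(axis=0)
                 + sigma * rng.normal(size=W1.shape))
    W2 -= eta * ((gW2 * scale[:, None, None]).mean(axis=0)
                 + sigma * rng.normal(size=W2.shape))

loss = np.logaddexp(0.0, -y * forward(X, W1, W2)).mean()
print(round(float(loss), 3))
```

The per-sample clipping step makes each example's influence on the update bounded, which is what lets the added Gaussian noise deliver the $(\epsilon,\delta)$ guarantee; larger $T$ or smaller $n\epsilon$ forces larger $\sigma$, trading utility for privacy as in the abstract's bound.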
