Training NTK to Generalize with KARE

Abstract

The performance of the data-dependent neural tangent kernel (NTK; Jacot et al., 2018) associated with a trained deep neural network (DNN) often matches or exceeds that of the full network. This implies that DNN training via gradient descent implicitly performs kernel learning by optimizing the NTK. In this paper, we propose instead to optimize the NTK explicitly. Rather than minimizing empirical risk, we train the NTK to minimize its generalization error using the recently developed Kernel Alignment Risk Estimator (KARE; Jacot et al., 2020). Our simulations and real-data experiments show that NTKs trained with KARE consistently match or significantly outperform both the original DNN and the DNN-induced NTK (the after-kernel). These results suggest that explicitly trained kernels can outperform traditional end-to-end DNN optimization in certain settings, challenging the conventional dominance of DNNs. We argue that explicit training of the NTK is a form of over-parametrized feature learning.
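
As a point of reference, the sketch below evaluates the KARE objective of Jacot et al. (2020) in plain NumPy for a given kernel (Gram) matrix K and labels y. It follows the published estimator, (1/n) y^T (K/n + lam*I)^{-2} y divided by ((1/n) tr[(K/n + lam*I)^{-1}])^2, but the ridge parameter lam, the RBF kernel in the toy usage example, and the function name are illustrative assumptions, not the authors' implementation (in the paper the kernel is the NTK of the network and KARE is minimized with respect to the network parameters).

import numpy as np

def kare(K, y, lam=1e-3):
    """Kernel Alignment Risk Estimator (Jacot et al., 2020).

    K   : (n, n) kernel (Gram) matrix on the training set
    y   : (n, 1) training labels
    lam : ridge regularization parameter (illustrative default)
    """
    n = K.shape[0]
    A = K / n + lam * np.eye(n)               # K/n + lam * I
    A_inv = np.linalg.inv(A)
    numerator = (y.T @ A_inv @ A_inv @ y) / n  # (1/n) y^T A^{-2} y
    denominator = (np.trace(A_inv) / n) ** 2   # ((1/n) tr A^{-1})^2
    return float(np.squeeze(numerator / denominator))

# Hypothetical usage: score a random RBF kernel on toy data
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.normal(size=(100, 1))
    sq_dists = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq_dists)
    print("KARE estimate:", kare(K, y))

In the paper's setting, one would replace the fixed RBF kernel with the network's NTK Gram matrix and backpropagate this scalar through the kernel to the network parameters; the NumPy version above only illustrates the risk estimate itself.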

@article{schwab2025_2505.11347,
  title={Training NTK to Generalize with KARE},
  author={Johannes Schwab and Bryan Kelly and Semyon Malamud and Teng Andrea Xu},
  journal={arXiv preprint arXiv:2505.11347},
  year={2025}
}