Tight conditions for when the NTK approximation is valid

Abstract
We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. 2019, we show that rescaling the model by a factor of $\alpha = O(T)$ suffices for the NTK approximation to be valid until training time $T$. Our bound is tight and improves on the previous bound of Chizat et al. 2019, which required a larger rescaling factor of $\alpha = O(T^2)$.
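To make the lazy-training setup concrete, below is a minimal sketch (not the paper's code): a small two-layer tanh network is rescaled by a factor alpha, trained with the square loss, and compared against its linearization at initialization, i.e., the NTK approximation. The architecture, alpha, learning rate, and step count are all illustrative assumptions, not taken from the paper.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
width, d, n = 256, 4, 32
kW, ka, kx = jax.random.split(key, 3)
params0 = (jax.random.normal(kW, (width, d)) / jnp.sqrt(d),  # hidden weights
           jax.random.normal(ka, (width,)))                  # output weights
X = jax.random.normal(kx, (n, d))
y = jnp.sin(X @ jnp.ones(d))  # toy regression targets

def f(params, X):
    # two-layer tanh network in the NTK parameterization
    W, a = params
    return jnp.tanh(X @ W.T) @ a / jnp.sqrt(width)

alpha = 100.0  # lazy-training rescaling factor (illustrative value)

def model(params, X):
    # rescaled, init-centered model: alpha * (f(theta) - f(theta_0))
    return alpha * (f(params, X) - f(params0, X))

def ntk_model(params, X):
    # NTK approximation: first-order Taylor expansion of model at theta_0
    delta = jax.tree_util.tree_map(jnp.subtract, params, params0)
    out0, jvp = jax.jvp(lambda p: model(p, X), (params0,), (delta,))
    return out0 + jvp

def train(predict, steps=500, lr=1.0 / alpha**2):
    # gradient descent on the square loss; lr ~ 1/alpha^2 keeps the
    # function-space dynamics at O(1) speed despite the rescaling
    loss = lambda p: 0.5 * jnp.mean((predict(p, X) - y) ** 2)
    grad = jax.jit(jax.grad(loss))
    params = params0
    for _ in range(steps):
        g = grad(params)
        params = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g)
    return params

p_full = train(model)      # actual (rescaled) training dynamics
p_ntk = train(ntk_model)   # dynamics under the NTK approximation
gap = jnp.max(jnp.abs(model(p_full, X) - ntk_model(p_ntk, X)))
print(f"max |full - linearized| on the training set: {gap:.2e}")
```

In this sketch, increasing alpha shrinks the gap between the two trained predictors, which is the qualitative behavior the paper quantifies: how large alpha must be, as a function of the training time, for the NTK approximation to remain valid.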