A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel

25 June 2022
Mohamad Amin Mohamadi
Wonho Bae
Danica J. Sutherland
Abstract

Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given network's representation: they are often far less expensive to compute and applicable more broadly than infinite-width NTKs. For networks with $O$ output units (e.g. an $O$-class classifier), however, the eNTK on $N$ inputs is of size $NO \times NO$, taking $O((NO)^2)$ memory and up to $O((NO)^3)$ computation. Most existing applications have therefore used one of a handful of approximations yielding $N \times N$ kernel matrices, saving orders of magnitude of computation, but with limited to no justification. We prove that one such approximation, which we call "sum of logits", converges to the true eNTK at initialization for any network with a wide final "readout" layer. Our experiments demonstrate the quality of this approximation for various uses across a range of settings.
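
The contrast the abstract draws between the full $NO \times NO$ eNTK and the $N \times N$ "sum of logits" kernel is easy to make concrete. Below is a minimal JAX sketch of both quantities, assuming `f` is an apply function mapping `(params, x)` to an `(N, O)` array of logits; the function names and the $1/\sqrt{O}$ normalization of the summed logits are illustrative assumptions, not the paper's exact formulation.

```python
import jax
import jax.numpy as jnp

def entk_full(f, params, x1, x2):
    """Full empirical NTK: an (N1*O, N2*O) matrix of parameter-Jacobian
    inner products. Memory and compute scale as O((NO)^2) and O((NO)^3)."""
    j1 = jax.jacobian(lambda p: f(p, x1))(params)  # leaves: (N1, O, *param_shape)
    j2 = jax.jacobian(lambda p: f(p, x2))(params)  # leaves: (N2, O, *param_shape)

    def block_dot(a, b):
        a = a.reshape(a.shape[0] * a.shape[1], -1)
        b = b.reshape(b.shape[0] * b.shape[1], -1)
        return a @ b.T

    return sum(jax.tree_util.tree_leaves(
        jax.tree_util.tree_map(block_dot, j1, j2)))

def entk_sum_of_logits(f, params, x1, x2, num_outputs):
    """Sum-of-logits approximation: differentiate the scalar
    g(x) = sum_o f_o(x) / sqrt(O), giving an (N1, N2) kernel.
    The 1/sqrt(O) factor is an assumed normalization for illustration."""
    g = lambda p, x: jnp.sum(f(p, x), axis=-1) / jnp.sqrt(num_outputs)
    j1 = jax.jacobian(lambda p: g(p, x1))(params)  # leaves: (N1, *param_shape)
    j2 = jax.jacobian(lambda p: g(p, x2))(params)  # leaves: (N2, *param_shape)

    def dot(a, b):
        return a.reshape(a.shape[0], -1) @ b.reshape(b.shape[0], -1).T

    return sum(jax.tree_util.tree_leaves(
        jax.tree_util.tree_map(dot, j1, j2)))

# Toy usage: a linear "network" with O = 3 outputs on N = 4 inputs.
key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (5, 3))}
f = lambda p, x: x @ p["w"]                          # (N, 5) -> (N, 3) logits
x = jax.random.normal(key, (4, 5))
print(entk_full(f, params, x, x).shape)              # (12, 12): NO x NO
print(entk_sum_of_logits(f, params, x, x, 3).shape)  # (4, 4): N x N
```

The payoff is visible in the kernel shapes: the full eNTK grows with both $N$ and $O$, while the sum-of-logits kernel depends only on $N$, which is what saves the orders of magnitude of computation the abstract mentions.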
