ResearchTrend.AI

arXiv: 2106.08619

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios

16 June 2021
Alessandro Favero
Francesco Cagnetta
M. Wyart
Abstract

Convolutional neural networks perform a local and translationally-invariant treatment of the data: quantifying which of these two aspects is central to their success remains a challenge. We study this problem within a teacher-student framework for kernel regression, using 'convolutional' kernels inspired by the neural tangent kernel of simple convolutional architectures of given filter size. Using heuristic methods from physics, we find in the ridgeless case that locality is key in determining the learning curve exponent β (which relates the test error ε_t ∼ P^{−β} to the size of the training set P), whereas translational invariance is not. In particular, if the filter size of the teacher t is smaller than that of the student s, β is a function of s only and does not depend on the input dimension. We confirm our predictions on β empirically. We conclude by proving, using a natural universality assumption, that performing kernel regression with a ridge that decreases with the size of the training set leads to similar learning curve exponents to those we obtain in the ridgeless case.
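The teacher-student setup described in the abstract can be sketched numerically: draw a teacher function from a Gaussian process whose convolutional kernel has filter size t, fit it by (near-)ridgeless kernel regression with a student kernel of filter size s ≥ t, and read off the learning curve exponent β from the log-log slope of test error versus training-set size P. The sketch below is a simplified illustration, not the paper's exact construction: the Laplacian base kernel, Gaussian inputs, the specific sizes, and all variable names are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_kernel(X, Z, s):
    """Convolutional kernel: average of a Laplacian base kernel
    exp(-||.||) over all cyclic patches of size s. X: (n, d), Z: (m, d)."""
    n, d = X.shape
    m = Z.shape[0]
    K = np.zeros((n, m))
    for i in range(d):                     # one term per patch location
        idx = [(i + j) % d for j in range(s)]
        dx = X[:, None, idx] - Z[None, :, idx]
        K += np.exp(-np.linalg.norm(dx, axis=2))
    return K / d

d, t, s = 6, 2, 3                          # input dim, teacher/student filter sizes
X_all = rng.standard_normal((900, d))

# Teacher: a Gaussian-process sample with filter size t
K_teacher = conv_kernel(X_all, X_all, t)
y_all = np.linalg.cholesky(K_teacher + 1e-8 * np.eye(900)) @ rng.standard_normal(900)

X_test, y_test = X_all[:300], y_all[:300]
Ps, errs = [50, 100, 200, 400], []
for P in Ps:
    X_tr, y_tr = X_all[300:300 + P], y_all[300:300 + P]
    K = conv_kernel(X_tr, X_tr, s)
    # Near-ridgeless regression (tiny jitter only for numerical stability)
    alpha = np.linalg.solve(K + 1e-8 * np.eye(P), y_tr)
    pred = conv_kernel(X_test, X_tr, s) @ alpha
    errs.append(np.mean((pred - y_test) ** 2))

# Estimate beta from the log-log slope of the learning curve eps_t ~ P^(-beta)
beta = -np.polyfit(np.log(Ps), np.log(errs), 1)[0]
print(f"estimated beta ≈ {beta:.2f}")
```

Per the paper's prediction, with t ≤ s the measured exponent should be governed by the student filter size s rather than the full input dimension d; a toy run at this scale only shows the qualitative decay of the test error, and averaging over many teacher draws and larger P would be needed for a clean estimate of β.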
