Optimal Rate of Kernel Regression in Large Dimensions

8 September 2023
Weihao Lu
Hao Zhang
Yicheng Li
Manyun Xu
Qian Lin
Abstract

We study kernel regression for large-dimensional data, where the sample size $n$ depends polynomially on the dimension $d$ of the samples, i.e., $n \asymp d^{\gamma}$ for some $\gamma > 0$. We first build a general tool to characterize the upper bound and the minimax lower bound of kernel regression for large-dimensional data through the Mendelson complexity $\varepsilon_n^2$ and the metric entropy $\bar{\varepsilon}_n^2$, respectively. When the target function falls into the RKHS associated with a (general) inner product model defined on $\mathbb{S}^d$, we use the new tool to show that the minimax rate of the excess risk of kernel regression is $n^{-1/2}$ when $n \asymp d^{\gamma}$ for $\gamma = 2, 4, 6, 8, \cdots$. We then further determine the optimal rate of the excess risk of kernel regression for all $\gamma > 0$ and find that the curve of the optimal rate as a function of $\gamma$ exhibits several new phenomena, including multiple descent and periodic plateau behavior. As an application, for the neural tangent kernel (NTK), we also provide a similarly explicit description of the curve of the optimal rate. As a direct corollary, these claims hold for wide neural networks as well.
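To make the setting concrete, here is a minimal, hypothetical sketch (not code from the paper) of kernel ridge regression with an inner-product kernel on the sphere in the regime $n \asymp d^{\gamma}$. The specific kernel $\Phi(t) = e^t$, the toy target function, and the ridge level $\lambda \asymp n^{-1/2}$ are illustrative assumptions, not choices taken from the paper.

```python
# Illustrative sketch: kernel ridge regression with an inner-product kernel
# k(x, z) = Phi(<x, z>) on the sphere S^d, with n ≍ d^gamma samples.
# Phi, the target f*, and the ridge level lam are hypothetical choices.
import numpy as np

def sample_sphere(n, d, rng):
    """Draw n points uniformly from the unit sphere S^d in R^(d+1)."""
    x = rng.standard_normal((n, d + 1))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def inner_product_kernel(X, Z):
    """Inner-product kernel Phi(<x, z>); Phi(t) = exp(t) as an example."""
    return np.exp(X @ Z.T)

d, gamma = 20, 2.0
n = int(d ** gamma)  # sample size scales polynomially in the dimension
rng = np.random.default_rng(0)

X = sample_sphere(n, d, rng)
f_star = lambda x: x[:, 0] * x[:, 1]          # toy target on the sphere
y = f_star(X) + 0.1 * rng.standard_normal(n)  # noisy observations

lam = n ** (-0.5)  # illustrative ridge level, matching the n^{-1/2} scale
K = inner_product_kernel(X, X)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

X_test = sample_sphere(1000, d, rng)
pred = inner_product_kernel(X_test, X) @ alpha
excess_risk = np.mean((pred - f_star(X_test)) ** 2)
print(f"n = {n}, empirical excess risk ~ {excess_risk:.4f}")
```

Repeating such an experiment over a grid of $\gamma$ values (holding the relation $n \asymp d^{\gamma}$ fixed) is one way to visualize empirically the multiple descent and periodic plateau behavior of the optimal rate described above.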
