
Adaptivity for Regularized Kernel Methods by Lepskii's Principle

15 April 2018
Nicole Mücke
arXiv:1804.05433
Abstract

We address the problem of {\it adaptivity} in the framework of reproducing kernel Hilbert space (RKHS) regression. More precisely, we analyze estimators arising from a linear regularization scheme $g_\lambda$. In practical applications, an important task is to choose the regularization parameter $\lambda$ appropriately, i.e. based only on the given data and independently of unknown structural assumptions on the regression function. An attractive approach that avoids data splitting is the {\it Lepskii Principle} (LP), also known in this setting as the {\it Balancing Principle}. We show that a modified parameter choice based on (LP) is adaptive and minimax optimal, up to a $\log\log(n)$ factor. A convenient consequence is that balancing in the $L^2(\nu)$-norm, which is easiest, automatically gives optimal balancing in all stronger norms interpolating between $L^2(\nu)$ and the RKHS norm. An analogous result remains open for other classical data-dependent choices of the regularization parameter, e.g. hold-out.
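To make the balancing idea concrete, here is a minimal sketch of Lepskii's principle applied to kernel ridge regression, balancing in the empirical $L^2$ norm over a decreasing grid of regularization parameters. This is not the paper's exact construction: the balancing constant `kappa`, the noise level `sigma`, and the variance proxy $\sigma/\sqrt{n\lambda}$ are illustrative assumptions chosen for readability.

```python
import numpy as np

def gaussian_kernel(X, Y, width=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def krr_fit(K, y, lam):
    # Kernel ridge regression coefficients: (K + n*lam*I)^{-1} y.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def lepskii_choice(K, y, lams, sigma=1.0, kappa=4.0):
    """Balancing (Lepskii) principle over a grid of lambdas.

    lams must be sorted in decreasing order (most to least regularized).
    sigma and kappa are illustrative; the paper's constants differ.
    Returns the selected lambda and its coefficient vector.
    """
    n = K.shape[0]
    alphas = [krr_fit(K, y, lam) for lam in lams]

    def dist(a, b):
        # Empirical L^2 distance between g_a and g_b at the data points,
        # using g(x_i) = (K alpha)_i.
        diff = K @ (a - b)
        return np.sqrt(np.mean(diff ** 2))

    # Variance proxy sigma(lam) ~ sigma / sqrt(n * lam): grows as lam shrinks.
    var = [sigma / np.sqrt(n * lam) for lam in lams]

    # Pick the largest lambda that balances against every less-regularized one.
    for j in range(len(lams)):
        if all(dist(alphas[j], alphas[i]) <= kappa * var[i]
               for i in range(j + 1, len(lams))):
            return lams[j], alphas[j]
    return lams[-1], alphas[-1]

# Toy usage: noisy samples of a smooth function on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(200)
K = gaussian_kernel(X, X, width=0.2)
lams = np.geomspace(1.0, 1e-6, 25)   # decreasing geometric grid
lam_hat, alpha_hat = lepskii_choice(K, y, lams, sigma=0.3)
```

Note that the selection rule never touches held-out data: it compares the fitted estimators $g_\lambda$ against each other, which is exactly the data-splitting-free property the abstract highlights.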
