Optimal rates for the regularized learning algorithms under general source condition
Abhishake Rastogi, Sivananthan Sampath
arXiv:1611.01900 · 7 November 2016

Papers citing "Optimal rates for the regularized learning algorithms under general source condition"

17 citing papers, newest first:

 1. Regularized least squares learning with heavy-tailed noise is minimax optimal
    Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier, Arthur Gretton (20 May 2025)

 2. On the Saturation Effects of Spectral Algorithms in Large Dimensions
    Weihao Lu, Haobo Zhang, Yicheng Li, Q. Lin (01 Mar 2025)

 3. On the Saturation Effect of Kernel Ridge Regression
    Yicheng Li, Haobo Zhang, Qian Lin (15 May 2024)

 4. Generalization error of spectral algorithms
    Maksim Velikanov, Maxim Panov, Dmitry Yarotsky (18 Mar 2024)

 5. Asymptotic Theory for Linear Functionals of Kernel Ridge Regression
    Rui Tuo, Lu Zou (07 Mar 2024)

 6. On the Optimality of Misspecified Kernel Ridge Regression
    Haobo Zhang, Yicheng Li, Weihao Lu, Qian Lin (12 May 2023)

 7. On the Optimality of Misspecified Spectral Algorithms
    Hao Zhang, Yicheng Li, Qian Lin (27 Mar 2023)

 8. Optimal Learning Rates for Regularized Least-Squares with a Fourier Capacity Condition
    Prem M. Talwai, D. Simchi-Levi (16 Apr 2022)

 9. Nonparametric approximation of conditional expectation operators
    Mattes Mollenhauer, P. Koltai (23 Dec 2020)

10. Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping
    Nicole Mücke, Enrico Reiss (18 Jun 2020)

11. Inverse learning in Hilbert scales
    Abhishake Rastogi, Peter Mathé (24 Feb 2020)

12. Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
    Abhishake Rastogi (01 Feb 2020)

13. Lepskii Principle in Supervised Learning
    Gilles Blanchard, Peter Mathé, Nicole Mücke (26 May 2019)

14. Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems
    Abhishake Rastogi, Gilles Blanchard, Peter Mathé (14 Feb 2019)

15. Adaptivity for Regularized Kernel Methods by Lepskii's Principle
    Nicole Mücke (15 Apr 2018)

16. Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces
    Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher (20 Jan 2018)

17. Manifold regularization based on Nyström type subsampling
    Abhishake Rastogi, Sivananthan Sampath (13 Oct 2017)