arXiv:1801.06720 (v4, latest)
Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces
20 January 2018
Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher
Papers citing "Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces" (36 of 36 papers shown)
Learning Curves of Stochastic Gradient Descent in Kernel Regression. Haihan Zhang, Weicheng Lin, Yuanshi Liu, Cong Fang. 28 May 2025.
Regularized least squares learning with heavy-tailed noise is minimax optimal. Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier, Arthur Gretton. 20 May 2025.
Divergence of Empirical Neural Tangent Kernel in Classification Problems. Zixiong Yu, Songtao Tian, Guhan Chen. 15 Apr 2025.
On the Pinsker bound of inner product kernel regression in large dimensions. Weihao Lu, Jialin Ding, Haobo Zhang, Qian Lin. 02 Sep 2024.
Random feature approximation for general spectral methods. Mike Nguyen, Nicole Mücke. 29 Aug 2023.
The SSL Interplay: Augmentations, Inductive Bias, and Generalization [SSL]. Vivien A. Cabannes, B. Kiani, Randall Balestriero, Yann LeCun, A. Bietti. 06 Feb 2023.
Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression. Jiading Liu, Lei Shi. 20 Nov 2022.
Optimal Rates for Regularized Conditional Mean Embedding Learning. Zhu Li, Dimitri Meunier, Mattes Mollenhauer, Arthur Gretton. 02 Aug 2022.
Functional linear and single-index models: A unified approach via Gaussian Stein identity. Krishnakumar Balasubramanian, Hans-Georg Müller, Bharath K. Sriperumbudur. 08 Jun 2022.
A Case of Exponential Convergence Rates for SVM. Vivien A. Cabannes, Stefano Vigogna. 20 May 2022.
Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent. Yiping Lu, Jose H. Blanchet, Lexing Ying. 15 May 2022.
Dimensionality Reduction and Wasserstein Stability for Kernel Regression. Stephan Eckstein, Armin Iske, Mathias Trabs. 17 Mar 2022.
Radial Basis Function Approximation with Distributively Stored Data on Spheres. Han Feng, Shao-Bo Lin, Ding-Xuan Zhou. 05 Dec 2021.
Data splitting improves statistical performance in overparametrized regimes. Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein. 21 Oct 2021.
Comparing Classes of Estimators: When does Gradient Descent Beat Ridge Regression in Linear Models? Dominic Richards, Yan Sun, Patrick Rebeschini. 26 Aug 2021.
Learning the optimal Tikhonov regularizer for inverse problems. Giovanni S. Alberti, Ernesto De Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria. 11 Jun 2021.
From inexact optimization to learning via gradient concentration. Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco. 09 Jun 2021.
Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime. Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová. 31 May 2021.
Sobolev Norm Learning Rates for Conditional Mean Embeddings. Prem M. Talwai, A. Shameli, D. Simchi-Levi. 16 May 2021.
Online nonparametric regression with Sobolev kernels. O. Zadorozhnyi, Pierre Gaillard, Sébastien Gerchinovitz, Alessandro Rudi. 06 Feb 2021.
Fast rates in structured prediction. Vivien A. Cabannes, Alessandro Rudi, Francis R. Bach. 01 Feb 2021.
Nonparametric approximation of conditional expectation operators. Mattes Mollenhauer, P. Koltai. 23 Dec 2020.
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime. Atsushi Nitanda, Taiji Suzuki. 22 Jun 2020.
Analyzing the discrepancy principle for kernelized spectral filter learning algorithms. Alain Celisse, Martin Wahl. 17 Apr 2020.
Distributed Learning with Dependent Samples. Zirui Sun, Shao-Bo Lin. 10 Feb 2020.
Beating SGD Saturation with Tail-Averaging and Minibatching. Nicole Mücke, Gergely Neu, Lorenzo Rosasco. 22 Feb 2019.
Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance. Ulysse Marteau-Ferey, Dmitrii Ostrovskii, Francis R. Bach, Alessandro Rudi. 08 Feb 2019.
Kernel Conjugate Gradient Methods with Random Projections. Bailey Kacsmar, Douglas R Stinson. 05 Nov 2018.
On Fast Leverage Score Sampling and Optimal Learning. Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco. 31 Oct 2018.
Learning with SGD and Random Features. Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco. 17 Jul 2018.
Manifold Structured Prediction. Alessandro Rudi, C. Ciliberto, Gian Maria Marconi, Lorenzo Rosasco. 26 Jun 2018.
Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance [OT]. Giulia Luise, Alessandro Rudi, Massimiliano Pontil, C. Ciliberto. 30 May 2018.
Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes. Loucas Pillaud-Vivien, Alessandro Rudi, Francis R. Bach. 25 May 2018.
Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces. Junhong Lin, Volkan Cevher. 12 Mar 2018.
Early stopping for kernel boosting algorithms: A general analysis with localized complexities. Yuting Wei, Fanny Yang, Martin J. Wainwright. 05 Jul 2017.
Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm. Simon Fischer, Ingo Steinwart. 23 Feb 2017.