ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Optimal Rates For Regularization Of Statistical Inverse Learning Problems

arXiv:1604.04054 · 14 April 2016
Gilles Blanchard, Nicole Mücke
ArXiv (abs) · PDF · HTML

Papers citing "Optimal Rates For Regularization Of Statistical Inverse Learning Problems"

48 / 98 papers shown
Data splitting improves statistical performance in overparametrized regimes
Nicole Mücke
Enrico Reiss
Jonas Rungenhagen
Markus Klein
60
8
0
21 Oct 2021
Comparing Classes of Estimators: When does Gradient Descent Beat Ridge Regression in Linear Models?
Dominic Richards
Yan Sun
Patrick Rebeschini
74
3
0
26 Aug 2021
Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization
Gaspard Beugnot
Julien Mairal
Alessandro Rudi
55
1
0
16 Jun 2021
Learning the optimal Tikhonov regularizer for inverse problems
Giovanni S. Alberti
Ernesto De Vito
Matti Lassas
Luca Ratti
Matteo Santacesaria
76
30
0
11 Jun 2021
From inexact optimization to learning via gradient concentration
Bernhard Stankewitz
Nicole Mücke
Lorenzo Rosasco
88
5
0
09 Jun 2021
Learning particle swarming models from data with Gaussian processes
Jinchao Feng
Charles Kulick
Yunxiang Ren
Sui Tang
79
6
0
04 Jun 2021
Two-layer neural networks with values in a Banach space
Yury Korolev
108
24
0
05 May 2021
Convergence of Gaussian process regression: Optimality, robustness, and relationship with kernel ridge regression
Wei Cao
Bing-Yi Jing
57
6
0
20 Apr 2021
Convex regularization in statistical inverse learning problems
T. Bubba
Martin Burger
T. Helin
Luca Ratti
56
10
0
18 Feb 2021
Online nonparametric regression with Sobolev kernels
O. Zadorozhnyi
Pierre Gaillard
Sébastien Gerchinovitz
Alessandro Rudi
47
4
0
06 Feb 2021
Nonparametric approximation of conditional expectation operators
Mattes Mollenhauer
P. Koltai
95
17
0
23 Dec 2020
Stochastic Gradient Descent Meets Distribution Regression
Nicole Mücke
62
5
0
24 Oct 2020
Decentralised Learning with Random Features and Distributed Gradient Descent
Dominic Richards
Patrick Rebeschini
Lorenzo Rosasco
67
18
0
01 Jul 2020
Optimal Rates of Distributed Regression with Imperfect Kernels
Hongwei Sun
Qiang Wu
30
15
0
30 Jun 2020
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
Atsushi Nitanda
Taiji Suzuki
91
41
0
22 Jun 2020
Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping
Nicole Mücke
Enrico Reiss
47
7
0
18 Jun 2020
Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana
Alessandro Rudi
Ernesto De Vito
Lorenzo Rosasco
91
8
0
17 Jun 2020
Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
Ernesto De Vito
Ž. Kereta
Valeriya Naumova
Lorenzo Rosasco
Stefano Vigogna
74
3
0
17 Jun 2020
Sample complexity and effective dimension for regression on manifolds
Andrew D. McRae
Justin Romberg
Mark A. Davenport
106
8
0
13 Jun 2020
Lower bounds for invariant statistical models with applications to principal component analysis
Martin Wahl
32
5
0
14 May 2020
Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
Alain Celisse
Martin Wahl
65
18
0
17 Apr 2020
A Spectral Analysis of Dot-product Kernels
M. Scetbon
Zaïd Harchaoui
419
2
0
28 Feb 2020
Inverse learning in Hilbert scales
Abhishake Rastogi
Peter Mathé
23
6
0
24 Feb 2020
A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings
Junhyung Park
Krikamol Muandet
124
84
0
10 Feb 2020
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
Abhishake Rastogi
58
5
0
01 Feb 2020
On the Improved Rates of Convergence for Matérn-type Kernel Ridge Regression, with Application to Calibration of Computer Models
Rui Tuo
Yan Wang
C. F. Jeff Wu
67
28
0
01 Jan 2020
Implicit Regularization of Accelerated Methods in Hilbert Spaces
Nicolò Pagliana
Lorenzo Rosasco
103
18
0
30 May 2019
Lepskii Principle in Supervised Learning
Gilles Blanchard
Peter Mathé
Nicole Mücke
55
12
0
26 May 2019
Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up
Dominic Richards
Patrick Rebeschini
69
13
0
08 May 2019
Beating SGD Saturation with Tail-Averaging and Minibatching
Nicole Mücke
Gergely Neu
Lorenzo Rosasco
106
37
0
22 Feb 2019
Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems
Abhishake Rastogi
Gilles Blanchard
Peter Mathé
28
8
0
14 Feb 2019
The empirical process of residuals from an inverse regression
T. Kutta
N. Bissantz
J. Chown
Holger Dette
49
2
0
09 Feb 2019
Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance
Ulysse Marteau-Ferey
Dmitrii Ostrovskii
Francis R. Bach
Alessandro Rudi
201
52
0
08 Feb 2019
A note on the prediction error of principal component regression
Yana B. Feygin
115
6
0
07 Nov 2018
Kernel Conjugate Gradient Methods with Random Projections
Bailey Kacsmar
Douglas R Stinson
61
4
0
05 Nov 2018
Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes
Loucas Pillaud-Vivien
Alessandro Rudi
Francis R. Bach
185
103
0
25 May 2018
Adaptivity for Regularized Kernel Methods by Lepskii's Principle
Nicole Mücke
17
3
0
15 Apr 2018
Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces
Junhong Lin
Volkan Cevher
32
9
0
12 Mar 2018
Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms
Junhong Lin
Volkan Cevher
82
34
0
22 Jan 2018
Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces
Junhong Lin
Alessandro Rudi
Lorenzo Rosasco
Volkan Cevher
185
99
0
20 Jan 2018
Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
Gilles Blanchard
O. Zadorozhnyi
47
7
0
05 Dec 2017
Reducing training time by efficient localized kernel regression
Nicole Mücke
54
11
0
11 Jul 2017
Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm
Simon Fischer
Ingo Steinwart
206
152
0
23 Feb 2017
Kernel regression, minimax rates and effective dimensionality: beyond the regular case
Gilles Blanchard
Nicole Mücke
53
9
0
12 Nov 2016
Optimal rates for the regularized learning algorithms under general source condition
Abhishake Rastogi
Sivananthan Sampath
77
31
0
07 Nov 2016
Parallelizing Spectral Algorithms for Kernel Learning
Gilles Blanchard
Nicole Mücke
67
15
0
24 Oct 2016
Convergence rates of Kernel Conjugate Gradient for random design regression
Gilles Blanchard
Nicole Krämer
81
38
0
08 Jul 2016
Weighted Residuals for Very Deep Networks
Lee H. Dicker
Daniel J. Hsu
32
47
0
28 May 2016