On the Sample Complexity of Subspace Learning

Neural Information Processing Systems (NeurIPS), 2013
21 August 2014
Alessandro Rudi, Guillermo D. Cañas, Lorenzo Rosasco
arXiv: 1408.5032

Papers citing "On the Sample Complexity of Subspace Learning"

17 citing papers shown.
Kernel ε-Greedy for Multi-Armed Bandits with Covariates
Sakshi Arya, Bharath K. Sriperumbudur
29 Jun 2023

A Rainbow in Deep Network Black Boxes
Florentin Guth, Brice Ménard, G. Rochette, S. Mallat
29 May 2023

Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
T. Ahmad, Luc Brogat-Motte, Pierre Laforgue, Florence d'Alché-Buc
20 Feb 2023

Vector-Valued Least-Squares Regression under Output Regularity Assumptions
Journal of Machine Learning Research (JMLR), 2022
Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d'Alché-Buc
16 Nov 2022

Statistical Optimality and Computational Efficiency of Nyström Kernel PCA
Journal of Machine Learning Research (JMLR), 2021
Nicholas Sterge, Bharath K. Sriperumbudur
19 May 2021

Kernel Regression in High Dimensions: Refined Analysis beyond Double Descent
Fanghui Liu, Zhenyu Liao, Johan A. K. Suykens
06 Oct 2020

Learning Output Embeddings in Structured Prediction
Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d'Alché-Buc
29 Jul 2020

Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco
17 Jun 2020

Large-scale Kernel Methods and Applications to Lifelong Robot Learning
Raffaello Camoriano
11 Dec 2019

Gain with no Pain: Efficient Kernel-PCA by Nyström Sampling
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Nicholas Sterge, Bharath K. Sriperumbudur, Lorenzo Rosasco, Alessandro Rudi
11 Jul 2019

Learning with SGD and Random Features
Neural Information Processing Systems (NeurIPS), 2018
Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco
17 Jul 2018

Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes
Loucas Pillaud-Vivien, Alessandro Rudi, Francis R. Bach
25 May 2018

Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces
Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher
20 Jan 2018

Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off
Bharath K. Sriperumbudur, Nicholas Sterge
20 Jun 2017

FALKON: An Optimal Large Scale Kernel Method
Neural Information Processing Systems (NeurIPS), 2017
Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco
31 May 2017

Less is More: Nyström Computational Regularization
Neural Information Processing Systems (NeurIPS), 2015
Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco
16 Jul 2015

Adaptive Randomized Dimension Reduction on Massive Data
Journal of Machine Learning Research (JMLR), 2015
Gregory Darnell, S. Georgiev, S. Mukherjee, Barbara E. Engelhardt
13 Apr 2015