ResearchTrend.AI

Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks
Surbhi Goel, Adam R. Klivans
Neural Information Processing Systems (NeurIPS), 2017
arXiv:1708.03708 · 11 August 2017

Papers citing "Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks"

16 / 16 papers shown
 1. Perturbing the Derivative: Wild Refitting for Model-Free Evaluation of Machine Learning Models under Bregman Losses
    Haichen Hu, David Simchi-Levi · 02 Sep 2025

 2. Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization
    Yutong Wang, Rishi Sonthalia, Wei Hu · AISTATS 2024 · 12 Mar 2024

 3. Learning Graph Neural Networks with Approximate Gradient Descent
    Qunwei Li, Shaofeng Zou, Leon Wenliang Zhong · AAAI 2020 · 07 Dec 2020 [GNN]

 4. Dissipative Deep Neural Dynamical Systems
    Ján Drgoňa, Soumya Vasisht, Aaron Tuor, D. Vrabie · 26 Nov 2020

 5. From Boltzmann Machines to Neural Networks and Back Again
    Surbhi Goel, Adam R. Klivans, Frederic Koehler · NeurIPS 2020 · 25 Jul 2020

 6. Frequency Bias in Neural Networks for Input of Non-Uniform Density
    Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, S. Kritchman · ICML 2020 · 10 Mar 2020

 7. Bayesian experimental design using regularized determinantal point processes
    Michal Derezinski, Feynman T. Liang, Michael W. Mahoney · AISTATS 2019 · 10 Jun 2019

 8. On the Learnability of Deep Random Networks
    Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, Rina Panigrahy · 08 Apr 2019

 9. Optimal Sketching Bounds for Exp-concave Stochastic Minimization
    Naman Agarwal, Alon Gonen · 21 May 2018

10. How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
    S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh · 21 May 2018 [SSL]

11. Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps
    S. Du, Surbhi Goel · 20 May 2018 [MLT]

12. Learning One Convolutional Layer with Overlapping Patches
    Surbhi Goel, Adam R. Klivans, Raghu Meka · 07 Feb 2018 [MLT]

13. To understand deep learning we need to understand kernel learning
    M. Belkin, Siyuan Ma, Soumik Mandal · 05 Feb 2018

14. Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
    S. Du, Jason D. Lee, Yuandong Tian, Barnabás Póczós, Aarti Singh · 03 Dec 2017 [MLT]

15. Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
    Surbhi Goel, Adam R. Klivans · 18 Sep 2017

16. Convergence Analysis of Two-layer Neural Networks with ReLU Activation
    Yuanzhi Li, Yang Yuan · 28 May 2017 [MLT]