Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks

Neural Information Processing Systems (NeurIPS), 2017
11 August 2017
Surbhi Goel, Adam R. Klivans
ArXiv (abs) · PDF · HTML

Papers citing "Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks"

21 / 21 papers shown
Doubly Wild Refitting: Model-Free Evaluation of High Dimensional Black-Box Predictions under Convex Losses
Haichen Hu, David Simchi-Levi
24 Nov 2025
Perturbing the Derivative: Wild Refitting for Model-Free Evaluation of Machine Learning Models under Bregman Losses
Haichen Hu, David Simchi-Levi
02 Sep 2025
Quantum AI for Alzheimer's disease early screening
Giacomo Cappiello, Filippo Caruso
01 May 2024
Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Yutong Wang, Rishi Sonthalia, Wei Hu
12 Mar 2024
On the Stochastic Stability of Deep Markov Models
Ján Drgoňa, Sayak Mukherjee, Jiaxin Zhang, Frank Liu, M. Halappanavar
08 Nov 2021
Learning Graph Neural Networks with Approximate Gradient Descent
AAAI Conference on Artificial Intelligence (AAAI), 2020
Qunwei Li, Shaofeng Zou, Leon Wenliang Zhong
07 Dec 2020
Dissipative Deep Neural Dynamical Systems
Ján Drgoňa, Soumya Vasisht, Aaron Tuor, D. Vrabie
26 Nov 2020
From Boltzmann Machines to Neural Networks and Back Again
Neural Information Processing Systems (NeurIPS), 2020
Surbhi Goel, Adam R. Klivans, Frederic Koehler
25 Jul 2020
Frequency Bias in Neural Networks for Input of Non-Uniform Density
International Conference on Machine Learning (ICML), 2020
Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, S. Kritchman
10 Mar 2020
Bayesian experimental design using regularized determinantal point processes
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney
10 Jun 2019
On the Learnability of Deep Random Networks
Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, Rina Panigrahy
08 Apr 2019
Recovering the Lowest Layer of Deep Networks with High Threshold Activations
Surbhi Goel, Rina Panigrahy
21 Mar 2019
Towards a Theoretical Understanding of Hashing-Based Neural Nets
Yibo Lin, Zhao Song, Lin F. Yang
26 Dec 2018
Optimal Sketching Bounds for Exp-concave Stochastic Minimization
Naman Agarwal, Alon Gonen
21 May 2018
How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh
21 May 2018
Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps
S. Du, Surbhi Goel
20 May 2018
Learning One Convolutional Layer with Overlapping Patches
Surbhi Goel, Adam R. Klivans, Raghu Meka
07 Feb 2018
To understand deep learning we need to understand kernel learning
M. Belkin, Siyuan Ma, Soumik Mandal
05 Feb 2018
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
S. Du, Jason D. Lee, Yuandong Tian, Barnabás Póczós, Aarti Singh
03 Dec 2017
Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
Surbhi Goel, Adam R. Klivans
18 Sep 2017
Convergence Analysis of Two-layer Neural Networks with ReLU Activation
Yuanzhi Li, Yang Yuan
28 May 2017