Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently

26 September 2019
Rong Ge
Runzhe Wang
Haoyu Zhao

Papers citing "Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently"

18 / 18 papers shown
Fixed width treelike neural networks capacity analysis -- generic activations
M. Stojnic
08 Feb 2024
Lifted RDT based capacity analysis of the 1-hidden layer treelike sign perceptrons neural networks
M. Stojnic
13 Dec 2023
Capacity of the treelike sign perceptrons neural networks with one hidden layer -- RDT based upper bounds
M. Stojnic
13 Dec 2023
Memorization with neural nets: going beyond the worst case
S. Dirksen
Patrick Finke
Martin Genzel
30 Sep 2023
Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization. Neural Information Processing Systems (NeurIPS), 2022
Simone Bombari
Mohammad Hossein Amani
Marco Mondelli
20 May 2022
Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks. IEEE Transactions on Signal Processing (IEEE TSP), 2021
D. Gamarnik
Eren C. Kızıldağ
Ilias Zadik
02 Mar 2021
When Are Solutions Connected in Deep Networks? Neural Information Processing Systems (NeurIPS), 2021
Quynh N. Nguyen
Pierre Bréchet
Marco Mondelli
18 Feb 2021
Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks. International Conference on Machine Learning (ICML), 2020
Quynh N. Nguyen
Marco Mondelli
Guido Montúfar
21 Dec 2020
When Do Curricula Work? International Conference on Learning Representations (ICLR), 2020
Xiaoxia Wu
Ethan Dyer
Behnam Neyshabur
05 Dec 2020
Understanding How Over-Parametrization Leads to Acceleration: A case of learning a single teacher neuron. Asian Conference on Machine Learning (ACML), 2020
Jun-Kun Wang
Jacob D. Abernethy
04 Oct 2020
Hardness of Learning Neural Networks with Natural Weights
Amit Daniely
Gal Vardi
05 Jun 2020
Memorizing Gaussians with no over-parameterizaion via gradient decent on neural networks
Amit Daniely
28 Mar 2020
Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology. Neural Information Processing Systems (NeurIPS), 2020
Quynh N. Nguyen
Marco Mondelli
18 Feb 2020
Learning Parities with Neural Networks. Neural Information Processing Systems (NeurIPS), 2020
Amit Daniely
Eran Malach
18 Feb 2020
Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent. Journal of Machine Learning Research (JMLR), 2020
David Holzmüller
Ingo Steinwart
12 Feb 2020
Memory capacity of neural networks with threshold and ReLU activations
Roman Vershynin
20 Jan 2020
Neural Networks Learning and Memorization with (almost) no Over-Parameterization. Neural Information Processing Systems (NeurIPS), 2019
Amit Daniely
22 Nov 2019
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks. SIAM Journal on Mathematics of Data Science (SIMODS), 2019
Chunfeng Cui
Kaiqi Zhang
Talgat Daulbaev
Julia Gusak
Ivan Oseledets
Zheng Zhang
29 Oct 2019