arXiv: 1810.09665
A jamming transition from under- to over-parametrization affects loss landscape and generalization
22 October 2018
S. Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, M. Wyart
Papers citing "A jamming transition from under- to over-parametrization affects loss landscape and generalization" (35 / 35 papers shown)
| Title | Authors | Topic | Citations | Date |
|---|---|---|---|---|
| The Double Descent Behavior in Two Layer Neural Network for Binary Classification | Chathurika S. Abeykoon, A. Beknazaryan, Hailin Sang | | 1 | 27 Apr 2025 |
| auto-fpt: Automating Free Probability Theory Calculations for Machine Learning Theory | Arjun Subramonian, Elvis Dohmatob | | 0 | 14 Apr 2025 |
| Breaking Neural Network Scaling Laws with Modularity | Akhilan Boopathy, Sunshine Jiang, William Yue, Jaedong Hwang, Abhiram Iyer, Ila Fiete | OOD | 2 | 09 Sep 2024 |
| Quantifying lottery tickets under label noise: accuracy, calibration, and complexity | V. Arora, Daniele Irto, Sebastian Goldt, G. Sanguinetti | | 2 | 21 Jun 2023 |
| Tradeoff of generalization error in unsupervised learning | Gilhan Kim, Ho-Jun Lee, Junghyo Jo, Yongjoo Baek | | 0 | 10 Mar 2023 |
| Can we avoid Double Descent in Deep Neural Networks? | Victor Quétu, Enzo Tartaglione | AI4CE | 3 | 26 Feb 2023 |
| A Simple Algorithm For Scaling Up Kernel Methods | Tengyu Xu, Bryan T. Kelly, Semyon Malamud | | 0 | 26 Jan 2023 |
| Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures | Antoine Bodin, N. Macris | | 4 | 13 Dec 2022 |
| A Survey of Learning Curves with Bad Behavior: or How More Data Need Not Lead to Better Performance | Marco Loog, T. Viering | | 1 | 25 Nov 2022 |
| Understanding the double descent curve in Machine Learning | Luis Sa-Couto, J. M. Ramos, Miguel Almeida, Andreas Wichert | | 1 | 18 Nov 2022 |
| Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models | Ethan Pickering, T. Sapsis | | 6 | 27 Aug 2022 |
| Interplay between depth of neural networks and locality of target functions | Takashi Mori, Masahito Ueda | | 0 | 28 Jan 2022 |
| Multi-scale Feature Learning Dynamics: Insights for Double Descent | Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie | | 25 | 06 Dec 2021 |
| On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons | Fangshuo Liao, Anastasios Kyrillidis | | 16 | 05 Dec 2021 |
| Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks | A. Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli | MLT | 13 | 03 Nov 2021 |
| Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model | A. Bodin, N. Macris | | 13 | 22 Oct 2021 |
| Learning through atypical "phase transitions" in overparameterized neural networks | Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina | | 26 | 01 Oct 2021 |
| A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning | Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk | | 71 | 06 Sep 2021 |
| Locality defeats the curse of dimensionality in convolutional teacher-student scenarios | Alessandro Favero, Francesco Cagnetta, M. Wyart | | 31 | 16 Jun 2021 |
| Relative stability toward diffeomorphisms indicates performance in deep nets | Leonardo Petrini, Alessandro Favero, Mario Geiger, M. Wyart | OOD | 15 | 06 May 2021 |
| The Shape of Learning Curves: a Review | T. Viering, Marco Loog | | 122 | 19 Mar 2021 |
| Learning curves of generic features maps for realistic datasets with a teacher-student model | Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová | | 135 | 16 Feb 2021 |
| Analysis of the Scalability of a Deep-Learning Network for Steganography "Into the Wild" | Hugo Ruiz, Marc Chaumont, Mehdi Yedroudj, A. Amara, Frédéric Comby, Gérard Subsol | | 9 | 29 Dec 2020 |
| Geometric compression of invariant manifolds in neural nets | J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart | MLT | 34 | 22 Jul 2020 |
| Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks | Yehuda Dar, Richard G. Baraniuk | | 19 | 12 Jun 2020 |
| Generalisation error in learning with random features and the hidden manifold model | Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová | | 165 | 21 Feb 2020 |
| Implicit Regularization of Random Feature Models | Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel | | 82 | 19 Feb 2020 |
| A Model of Double Descent for High-dimensional Binary Linear Classification | Zeyu Deng, A. Kammoun, Christos Thrampoulidis | | 143 | 13 Nov 2019 |
| Finite Depth and Width Corrections to the Neural Tangent Kernel | Boris Hanin, Mihai Nica | MDE | 150 | 13 Sep 2019 |
| Two models of double descent for weak features | M. Belkin, Daniel J. Hsu, Ji Xu | | 375 | 18 Mar 2019 |
| Scaling description of generalization with number of parameters in deep learning | Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart | | 194 | 06 Jan 2019 |
| Reconciling modern machine learning practice and the bias-variance trade-off | M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal | | 1,610 | 28 Dec 2018 |
| Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization | D. Kobak, Jonathan Lomond, Benoit Sanchez | | 89 | 28 May 2018 |
| High-dimensional dynamics of generalization error in neural networks | Madhu S. Advani, Andrew M. Saxe | AI4CE | 464 | 10 Oct 2017 |
| The Loss Surfaces of Multilayer Networks | A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun | ODL | 1,185 | 30 Nov 2014 |