Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington
arXiv:2011.03321 · 4 November 2020 · UD

Papers citing "Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition"
17 / 17 papers shown

auto-fpt: Automating Free Probability Theory Calculations for Machine Learning Theory
Arjun Subramonian, Elvis Dohmatob
24 · 0 · 0 · 14 Apr 2025

Analysis of Overparameterization in Continual Learning under a Linear Model
Daniel Goldfarb, Paul Hand
CLL · 39 · 0 · 0 · 11 Feb 2025

Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
Blake Bordelon, C. Pehlevan
AI4CE · 61 · 1 · 0 · 04 Feb 2025

Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis
Yufan Li, Subhabrata Sen, Ben Adlam
MLT · 43 · 1 · 0 · 18 Apr 2024

A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban
MLT · 34 · 19 · 0 · 11 Oct 2023

Gibbs-Based Information Criteria and the Over-Parameterized Regime
Haobo Chen, Yuheng Bu, Greg Wornell
21 · 1 · 0 · 08 Jun 2023

Subsample Ridge Ensembles: Equivalences and Generalized Cross-Validation
Jin-Hong Du, Pratik V. Patil, Arun K. Kuchibhotla
16 · 11 · 0 · 25 Apr 2023

Pathologies of Predictive Diversity in Deep Ensembles
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, John P. Cunningham
UQCV · 38 · 13 · 0 · 01 Feb 2023

Demystifying Disagreement-on-the-Line in High Dimensions
Donghwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani
21 · 8 · 0 · 31 Jan 2023

Gradient flow in the Gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris
31 · 4 · 0 · 13 Dec 2022

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel
25 · 11 · 0 · 03 Jun 2022

Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurélien Lucchi
44 · 7 · 0 · 07 Mar 2022

Contrasting random and learned features in deep Bayesian linear regression
Jacob A. Zavatone-Veth, William L. Tong, C. Pehlevan
BDL · MLT · 28 · 26 · 0 · 01 Mar 2022

Deep Ensembles Work, But Are They Necessary?
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, R. Zemel, John P. Cunningham
OOD · UQCV · 36 · 59 · 0 · 14 Feb 2022

Understanding the bias-variance tradeoff of Bregman divergences
Ben Adlam, Neha Gupta, Zelda E. Mariet, Jamie Smith
UQCV · UD · 15 · 6 · 0 · 08 Feb 2022

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
BDL · 34 · 172 · 0 · 23 Apr 2020

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
93 · 152 · 0 · 02 Mar 2020