ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Modern Take on the Bias-Variance Tradeoff in Neural Networks
19 October 2018
Brady Neal, Sarthak Mittal, A. Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas

Papers citing "A Modern Take on the Bias-Variance Tradeoff in Neural Networks"

29 papers shown
Gabor-Enhanced Physics-Informed Neural Networks for Fast Simulations of Acoustic Wavefields
Mohammad Mahdi Abedi, David Pardo, Tariq Alkhalifah
24 Feb 2025
Understanding Model Ensemble in Transferable Adversarial Attack
Wei Yao, Zeliang Zhang, Huayi Tang, Yong Liu
09 Oct 2024
Breaking Neural Network Scaling Laws with Modularity
Akhilan Boopathy, Sunshine Jiang, William Yue, Jaedong Hwang, Abhiram Iyer, Ila Fiete
OOD
09 Sep 2024
Rethinking Semi-Supervised Imbalanced Node Classification from Bias-Variance Decomposition
Divin Yan, Gengchen Wei, Chen Yang, Shengzhong Zhang, Zengfeng Huang
AI4CE
28 Oct 2023
Training-Free Neural Active Learning with Initialization-Robustness Guarantees
Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng, K. H. Low
AAML
07 Jun 2023
Lower bounds for the trade-off between bias and mean absolute deviation
A. Derumigny, Johannes Schmidt-Hieber
21 Mar 2023
Pathologies of Predictive Diversity in Deep Ensembles
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, John P. Cunningham
UQCV
01 Feb 2023
Task Discovery: Finding the Tasks that Neural Networks Generalize on
Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir
OOD
01 Dec 2022
Understanding the double descent curve in Machine Learning
Luis Sa-Couto, J. M. Ramos, Miguel Almeida, Andreas Wichert
18 Nov 2022
Packed-Ensembles for Efficient Uncertainty Estimation
Olivier Laurent, Adrien Lafage, Enzo Tartaglione, Geoffrey Daniel, Jean-Marc Martinez, Andrei Bursuc, Gianni Franchi
OODD
17 Oct 2022
The Dynamic of Consensus in Deep Networks and the Identification of Noisy Labels
Daniel Shwartz, Uri Stern, D. Weinshall
NoLa
02 Oct 2022
Deep Double Descent via Smooth Interpolation
Matteo Gamba, Erik Englesson, Marten Bjorkman, Hossein Azizpour
21 Sep 2022
Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena
OOD, MIACV
18 Sep 2022
The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks
Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur
AI4CE
25 Jul 2022
Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel
03 Jun 2022
Understanding the bias-variance tradeoff of Bregman divergences
Ben Adlam, Neha Gupta, Zelda E. Mariet, Jamie Smith
UQCV, UD
08 Feb 2022
An Analysis on Ensemble Learning optimized Medical Image Classification with Deep Convolutional Neural Networks
Dominik Muller, Iñaki Soto Rey, Frank Kramer
27 Jan 2022
More layers! End-to-end regression and uncertainty on tabular data with deep learning
Ivan Bondarenko
OOD, LMTD, UQCV
07 Dec 2021
Multi-scale Feature Learning Dynamics: Insights for Double Descent
Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie
06 Dec 2021
From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness
Lingxiao Zhao, Wei Jin, L. Akoglu, Neil Shah
GNN
07 Oct 2021
Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington
UD
04 Nov 2020
Multiple Descent: Design Your Own Generalization Curve
Lin Chen, Yifei Min, M. Belkin, Amin Karbasi
DRL
03 Aug 2020
A Survey of End-to-End Driving: Architectures and Training Methods
Ardi Tampuu, Maksym Semikin, Naveed Muhammad, D. Fishman, Tambet Matiisen
3DV
13 Mar 2020
Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
02 Mar 2020
Generalisation error in learning with random features and the hidden manifold model
Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová
21 Feb 2020
Implicit Regularization of Random Feature Models
Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel
19 Feb 2020
Scaling description of generalization with number of parameters in deep learning
Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart
06 Jan 2019
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016