Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
arXiv:1812.11118, 28 December 2018

Papers citing "Reconciling modern machine learning practice and the bias-variance trade-off"

Showing 50 of 942 citing papers.
  • Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. Adhyyan Narang, Vidya Muthukumar, A. Sahai. Tags: SILM, AAML. 27 Sep 2021.
  • Is the Number of Trainable Parameters All That Actually Matters? A. Chatelain, Amine Djeghri, Daniel Hesslow, Julien Launay, Iacopo Poli. 24 Sep 2021.
  • A deep neural network for multi-species fish detection using multiple acoustic cameras. Garcia Fernandez Guglielmo, François Martignac, M. Nevoux, L. Beaulaton, Thomas Corpetti. 22 Sep 2021.
  • Learning PAC-Bayes Priors for Probabilistic Neural Networks. Maria Perez-Ortiz, Omar Rivasplata, Benjamin Guedj, M. Gleeson, Jingyu Zhang, John Shawe-Taylor, M. Bober, J. Kittler. Tags: UQCV. 21 Sep 2021.
  • Understanding neural networks with reproducing kernel Banach spaces. Francesca Bartolucci, Ernesto De Vito, Lorenzo Rosasco, Stefano Vigogna. 20 Sep 2021.
  • A Neural Tangent Kernel Perspective of Infinite Tree Ensembles (ICLR 2021). Ryuichi Kanoh, M. Sugiyama. 10 Sep 2021.
  • Learning the hypotheses space from data through a U-curve algorithm. Diego Marcondes, Adilson Simonis, Junior Barrera. 08 Sep 2021.
  • A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning. Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk. 06 Sep 2021.
  • When and how epochwise double descent happens. Cory Stephenson, Tyler Lee. 26 Aug 2021.
  • Shift-Curvature, SGD, and Generalization. Arwen V. Bradley, C. Gomez-Uribe, Manish Reddy Vuyyuru. 21 Aug 2021.
  • Implicit Sparse Regularization: The Impact of Depth and Early Stopping (NeurIPS 2021). Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, R. K. Wong. 12 Aug 2021.
  • Deep Learning Classification of Lake Zooplankton (bioRxiv 2021). S. Kyathanahally, T. Hardeman, E. Merz, Thea Kozakiewicz, M. Reyes, P. Isles, F. Pomati, Carlo Albert. 11 Aug 2021.
  • Interpolation can hurt robust generalization even when there is no noise (NeurIPS 2021). Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang. 05 Aug 2021.
  • Deep multi-task mining Calabi-Yau four-folds. Harold Erbin, Riccardo Finotello, Robin Schneider, M. Tamaazousti. 04 Aug 2021.
  • Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks (PNAS 2021). Adityanarayanan Radhakrishnan, George Stefanakis, M. Belkin, Caroline Uhler. 31 Jul 2021.
  • To Boost or not to Boost: On the Limits of Boosted Neural Networks. Sai Saketh Rambhatla, Michael J. Jones, Rama Chellappa. 28 Jul 2021.
  • The loss landscape of deep linear neural networks: a second-order analysis. El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz. Tags: ODL. 28 Jul 2021.
  • An explainable two-dimensional single model deep learning approach for Alzheimer's disease diagnosis and brain atrophy localization. Fan Zhang, Bo Pan, Pengfei Shao, Peng Liu, Shuwei Shen, Peng Yao, Ronald X. Xu. 28 Jul 2021.
  • On the Role of Optimization in Double Descent: A Least Squares Study (NeurIPS 2021). Ilja Kuzborskij, Csaba Szepesvári, Omar Rivasplata, Amal Rannen-Triki, Razvan Pascanu. 27 Jul 2021.
  • An Instance-Dependent Simulation Framework for Learning with Label Noise (Machine-mediated learning (ML), 2021). Keren Gu, Xander Masotto, Vandana Bachani, Balaji Lakshminarayanan, Jack Nikodem, Dong Yin. Tags: NoLa. 23 Jul 2021.
  • Taxonomizing local versus global structure in neural network loss landscapes (NeurIPS 2021). Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney. 23 Jul 2021.
  • Over-Parameterization and Generalization in Audio Classification. Khaled Koutini, Hamid Eghbalzadeh, Florian Henkel, Jan Schluter, Gerhard Widmer. 19 Jul 2021.
  • Reasoning-Modulated Representations (LoG 2021). Petar Veličković, Matko Bošnjak, Thomas Kipf, Alexander Lerchner, R. Hadsell, Razvan Pascanu, Charles Blundell. Tags: OCL, OOD, SSL. 19 Jul 2021.
  • Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints. Shaojie Li, Yong Liu. 19 Jul 2021.
  • A Theory of PAC Learnability of Partial Concept Classes (FOCS 2021). N. Alon, Steve Hanneke, R. Holzman, Shay Moran. 18 Jul 2021.
  • A Field Guide to Federated Optimization. Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu. Tags: FedML. 14 Jul 2021.
  • The Foes of Neural Network's Data Efficiency Among Unnecessary Input Dimensions. Vanessa D'Amario, S. Srivastava, Tomotake Sasaki, Xavier Boix. Tags: AAML. 13 Jul 2021.
  • Structured Directional Pruning via Perturbation Orthogonal Projection. Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng. 12 Jul 2021.
  • Random Neural Networks in the Infinite Width Limit as Gaussian Processes. Boris Hanin. Tags: BDL. 04 Jul 2021.
  • JUWELS Booster -- A Supercomputer for Large-Scale AI Research. Stefan Kesselheim, A. Herten, K. Krajsek, J. Ebert, J. Jitsev, ..., A. Strube, Roshni Kamath, Martin G. Schultz, M. Riedel, T. Lippert. Tags: GNN. 30 Jun 2021.
  • Analytic Insights into Structure and Rank of Neural Network Hessian Maps (NeurIPS 2021). Sidak Pal Singh, Gregor Bachmann, Thomas Hofmann. Tags: FAtt. 30 Jun 2021.
  • Understanding and Improving Early Stopping for Learning with Noisy Labels (NeurIPS 2021). Ying-Long Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu. Tags: NoLa. 30 Jun 2021.
  • A Mechanism for Producing Aligned Latent Spaces with Autoencoders. Saachi Jain, Adityanarayanan Radhakrishnan, Caroline Uhler. 29 Jun 2021.
  • Assessing Generalization of SGD via Disagreement (ICLR 2021). Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, J. Zico Kolter. 25 Jun 2021.
  • Jitter: Random Jittering Loss Function (IJCNN 2021). Zhicheng Cai, Chenglei Peng, S. Du. 25 Jun 2021.
  • Bayesian Deep Learning Hyperparameter Search for Robust Function Mapping to Polynomials with Noise. Nidhin Harilal, Udit Bhatia, A. Ganguly. Tags: OOD. 23 Jun 2021.
  • Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation (IEEE Trans. Inf. Theory 2021). Ke Wang, Vidya Muthukumar, Christos Thrampoulidis. 21 Jun 2021.
  • Compression Implies Generalization. Allan Grønlund, M. Hogsgaard, Lior Kamma, Kasper Green Larsen. Tags: MLT, AI4CE. 15 Jun 2021.
  • Training Graph Neural Networks with 1000 Layers (ICML 2021). Guohao Li, Matthias Muller, V. Koltun. Tags: GNN, AI4CE. 14 Jun 2021.
  • Pre-Trained Models: Past, Present and Future (AI Open 2021). Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu. Tags: AIFin, MQ, AI4MH. 14 Jun 2021.
  • Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data. Beau Coker, Weiwei Pan, Finale Doshi-Velez. Tags: BDL. 13 Jun 2021.
  • The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective (NeurIPS 2021). Geoff Pleiss, John P. Cunningham. 11 Jun 2021.
  • Neural Symbolic Regression that Scales (ICML 2021). Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, Giambattista Parascandolo. 11 Jun 2021.
  • Curiously Effective Features for Image Quality Prediction (ICIP 2021). S. Becker, Thomas Wiegand, S. Bosse. 10 Jun 2021.
  • Probing transfer learning with a model of synthetic correlated datasets. Federica Gerace, Luca Saglietti, Stefano Sarao Mannelli, Andrew M. Saxe, Lenka Zdeborová. Tags: OOD. 09 Jun 2021.
  • Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off. Huiyuan Wang, Wei Lin. Tags: MLT. 09 Jun 2021.
  • Ghosts in Neural Networks: Existence, Structure and Role of Infinite-Dimensional Null Space. Sho Sonoda, Isao Ishikawa, Masahiro Ikeda. Tags: BDL. 09 Jun 2021.
  • The Randomness of Input Data Spaces is an A Priori Predictor for Generalization (KI 2021). Martin Briesch, Dominik Sobania, Franz Rothlauf. Tags: UQCV. 08 Jun 2021.
  • Double Descent and Other Interpolation Phenomena in GANs. Lorenzo Luzi, Yehuda Dar, Richard Baraniuk. 07 Jun 2021.
  • Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions (NeurIPS 2021). Bruno Loureiro, G. Sicuro, Cédric Gerbelot, Alessandro Pacco, Florent Krzakala, Lenka Zdeborová. 07 Jun 2021.