ResearchTrend.AI
Reconciling modern machine learning practice and the bias-variance trade-off
arXiv:1812.11118 · 28 December 2018
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal

Papers citing "Reconciling modern machine learning practice and the bias-variance trade-off"

50 / 942 papers shown
On the Inherent Regularization Effects of Noise Injection During Training
International Conference on Machine Learning (ICML), 2021
Oussama Dhifallah, Yue M. Lu
15 Feb 2021 · 125 · 33 · 0

Double-descent curves in neural networks: a new perspective using Gaussian processes
AAAI Conference on Artificial Intelligence (AAAI), 2021
Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis
14 Feb 2021 · 366 · 6 · 0

Distilling Double Descent
Andrew Cotter, A. Menon, Harikrishna Narasimhan, A. S. Rawat, Sashank J. Reddi, Yichen Zhou
13 Feb 2021 · 139 · 7 · 0

Learning Curve Theory
Marcus Hutter
08 Feb 2021 · 333 · 77 · 0

Last iterate convergence of SGD for Least-Squares in the Interpolation regime
Neural Information Processing Systems (NeurIPS), 2021
Aditya Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
05 Feb 2021 · 164 · 43 · 0

A Deeper Look into Convolutions via Eigenvalue-based Pruning
Ilke Çugu, Emre Akbas
04 Feb 2021 · FAtt · 115 · 2 · 0

Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective
International Conference on Learning Representations (ICLR), 2021
Helong Zhou, Liangchen Song, Jiajie Chen, Ye Zhou, Guoli Wang, Junsong Yuan, Qian Zhang
01 Feb 2021 · 339 · 199 · 0

Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2021
Cong Fang, Hangfeng He, Qi Long, Weijie J. Su
29 Jan 2021 · FAtt · 404 · 202 · 0

A Statistician Teaches Deep Learning
Journal of Statistical Theory and Practice (JSTP), 2021
G. Babu, David L. Banks, Hyunsoo Cho, David Han, Hailin Sang, Shouyi Wang
29 Jan 2021 · 207 · 2 · 0

Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration
Applied and Computational Harmonic Analysis (ACHA), 2021
Song Mei, Theodor Misiakiewicz, Andrea Montanari
26 Jan 2021 · 210 · 123 · 0
Deep Learning Generalization and the Convex Hull of Training Sets
Roozbeh Yousefzadeh
25 Jan 2021 · 124 · 20 · 0

Linear Regression with Distributed Learning: A Generalization Error Perspective
IEEE Transactions on Signal Processing (IEEE TSP), 2021
Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
22 Jan 2021 · FedML · 251 · 10 · 0

Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Lang Huang, Chaoning Zhang, Hongyang R. Zhang
21 Jan 2021 · SSL · 248 · 30 · 0

Implicit Bias of Linear RNNs
International Conference on Machine Learning (ICML), 2021
M Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher
19 Jan 2021 · 129 · 13 · 0

Phases of learning dynamics in artificial neural networks: with or without mislabeled data
Yu Feng, Y. Tu
16 Jan 2021 · 100 · 2 · 0

Fitting very flexible models: Linear regression with large numbers of parameters
Publications of the Astronomical Society of the Pacific (PASP), 2021
D. Hogg, Soledad Villar
15 Jan 2021 · 168 · 7 · 0

The data synergy effects of time-series deep learning models in hydrology
Water Resources Research (WRR), 2021
K. Fang, Daniel Kifer, K. Lawson, D. Feng, Chaopeng Shen
06 Jan 2021 · AI4CE · 285 · 103 · 0

A unifying approach on bias and variance analysis for classification
Cemre Zor, T. Windeatt
05 Jan 2021 · 23 · 0 · 0

Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training
Mario Geiger, Leonardo Petrini, Matthieu Wyart
30 Dec 2020 · DRL · 157 · 11 · 0

Analysis of the Scalability of a Deep-Learning Network for Steganography "Into the Wild"
Hugo Ruiz, Marc Chaumont, Mehdi Yedroudj, A. Amara, Frédéric Comby, Gérard Subsol
29 Dec 2020 · 129 · 9 · 0

Data augmentation and image understanding
Alex Hernandez-Garcia
28 Dec 2020 · 134 · 6 · 0
Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration
Cognitive Systems Research (CSR), 2020
Daniel C. Elton
16 Dec 2020 · 191 · 4 · 0

Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
AAAI Conference on Artificial Intelligence (AAAI), 2020
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis
16 Dec 2020 · 248 · 58 · 0

A case for new neural network smoothness constraints
Mihaela Rosca, T. Weber, Arthur Gretton, S. Mohamed
14 Dec 2020 · AAML · 291 · 59 · 0

Avoiding The Double Descent Phenomenon of Random Feature Models Using Hybrid Regularization
Kelvin K. Kan, J. Nagy, Lars Ruthotto
11 Dec 2020 · AI4CE · 113 · 6 · 0

Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics
IFAC-PapersOnLine, 2020
Antônio H. Ribeiro, J. Hendriks, A. Wills, Thomas B. Schon
11 Dec 2020 · 95 · 8 · 0

Generalization bounds for deep learning
Guillermo Valle Pérez, A. Louis
07 Dec 2020 · BDL · 231 · 48 · 0

Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization
Qianyi Li, H. Sompolinsky
07 Dec 2020 · 374 · 84 · 0

Model Compression Using Optimal Transport
Suhas Lohit, Michael J. Jones
07 Dec 2020 · 201 · 9 · 0

Understanding Interpretability by generalized distillation in Supervised Classification
Adit Agarwal, Dr. K.K. Shukla, Arjan Kuijper, Anirban Mukhopadhyay
05 Dec 2020 · FaML · FAtt · 128 · 0 · 0

Rethinking supervised learning: insights from biological learning and from calling it by its name
Alex Hernandez-Garcia
04 Dec 2020 · SSL · 147 · 0 · 0

On the robustness of minimum norm interpolators and regularized empirical risk minimizers
Annals of Statistics (Ann. Stat.), 2020
Geoffrey Chinot, Matthias Löffler, Sara van de Geer
01 Dec 2020 · 284 · 22 · 0
Scaling Down Deep Learning with MNIST-1D
International Conference on Machine Learning (ICML), 2020
S. Greydanus, Dmitry Kobak
29 Nov 2020 · 252 · 24 · 0

Dimensionality reduction, regularization, and generalization in overparameterized regressions
SIAM Journal on Mathematics of Data Science (SIMODS), 2020
Ningyuan Huang, D. Hogg, Soledad Villar
23 Nov 2020 · 204 · 18 · 0

Deep Empirical Risk Minimization in finance: looking into the future
Mathematical Finance (Math. Finance), 2020
A. M. Reppen, H. Soner
18 Nov 2020 · 202 · 21 · 0

Topological properties of basins of attraction and expressiveness of width bounded neural networks
H. Beise, S. Cruz
10 Nov 2020 · 277 · 0 · 0

Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification
Agus Sudjianto, William Knauth, Rahul Singh, Zebin Yang, Aijun Zhang
08 Nov 2020 · FAtt · 200 · 50 · 0

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington
04 Nov 2020 · UD · 223 · 102 · 0

Instance based Generalization in Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2020
Martín Bertrán, Natalia Martínez, Mariano Phielipp, Guillermo Sapiro
02 Nov 2020 · OffRL · 222 · 19 · 0

The Performance Analysis of Generalized Margin Maximizer (GMM) on Separable Data
International Conference on Machine Learning (ICML), 2020
Fariborz Salehi, Ehsan Abbasi, B. Hassibi
29 Oct 2020 · 131 · 19 · 0

A Bayesian Perspective on Training Speed and Model Selection
Neural Information Processing Systems (NeurIPS), 2020
Clare Lyle, Lisa Schut, Binxin Ru, Y. Gal, Mark van der Wilk
27 Oct 2020 · 180 · 24 · 0

Are wider nets better given the same number of parameters?
International Conference on Learning Representations (ICLR), 2020
A. Golubeva, Behnam Neyshabur, Guy Gur-Ari
27 Oct 2020 · 202 · 46 · 0
Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models
Physical Review Research (PRResearch), 2020
J. Rocks, Pankaj Mehta
26 Oct 2020 · 381 · 54 · 0

Provable Memorization via Deep Neural Networks using Sub-linear Parameters
Annual Conference Computational Learning Theory (COLT), 2020
Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin
26 Oct 2020 · FedML · MDE · 165 · 43 · 0

Unified Gradient Reweighting for Model Biasing with Applications to Source Separation
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Efthymios Tzinis, Dimitrios Bralios, Paris Smaragdis
25 Oct 2020 · 240 · 1 · 0

Train simultaneously, generalize better: Stability of gradient-based minimax learners
International Conference on Machine Learning (ICML), 2020
Farzan Farnia, Asuman Ozdaglar
23 Oct 2020 · 163 · 53 · 0

Fast and Smooth Interpolation on Wasserstein Space
Sinho Chewi, Julien Clancy, Thibaut Le Gouic, Philippe Rigollet, George Stepaniants, Austin J. Stromme
22 Oct 2020 · 149 · 32 · 0

Precise High-Dimensional Asymptotics for Quantifying Heterogeneous Transfers
Fan Yang, Hongyang R. Zhang, Sen Wu, Christopher Ré, Weijie J. Su
22 Oct 2020 · 454 · 20 · 0

Precise Statistical Analysis of Classification Accuracies for Adversarial Training
Adel Javanmard, Mahdi Soltanolkotabi
21 Oct 2020 · AAML · 357 · 66 · 0

Increasing Depth Leads to U-Shaped Test Risk in Over-parameterized Convolutional Networks
Eshaan Nichani, Adityanarayanan Radhakrishnan, Caroline Uhler
19 Oct 2020 · 263 · 9 · 0