Reconciling modern machine learning practice and the bias-variance trade-off

28 December 2018
M. Belkin
Daniel J. Hsu
Siyuan Ma
Soumik Mandal
arXiv:1812.11118

Papers citing "Reconciling modern machine learning practice and the bias-variance trade-off"

50 / 273 papers shown
What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems
Saddek Bensalem
Chih-Hong Cheng
Wei Huang
Xiaowei Huang
Changshun Wu
Xingyu Zhao
AAML
24
6
0
20 Jul 2023
The Interpolating Information Criterion for Overparameterized Models
Liam Hodgkinson
Christopher van der Heide
Roberto Salomone
Fred Roosta
Michael W. Mahoney
20
7
0
15 Jul 2023
Quantifying lottery tickets under label noise: accuracy, calibration, and complexity
V. Arora
Daniele Irto
Sebastian Goldt
G. Sanguinetti
36
2
0
21 Jun 2023
Progressive Class-Wise Attention (PCA) Approach for Diagnosing Skin Lesions
Asim Naveed
Syed S. Naqvi
Tariq Mahmood Khan
Imran Razzak
31
1
0
11 Jun 2023
Gibbs-Based Information Criteria and the Over-Parameterized Regime
Haobo Chen
Yuheng Bu
Greg Wornell
27
1
0
08 Jun 2023
Maximally Machine-Learnable Portfolios
Philippe Goulet Coulombe
Maximilian Göbel
21
3
0
08 Jun 2023
Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability
Jianing Zhu
Hengzhuang Li
Jiangchao Yao
Tongliang Liu
Jianliang Xu
Bo Han
OODD
40
12
0
06 Jun 2023
Unraveling Projection Heads in Contrastive Learning: Insights from Expansion and Shrinkage
Yu Gui
Cong Ma
Yiqiao Zhong
22
6
0
06 Jun 2023
Generalized equivalences between subsampling and ridge regularization
Pratik V. Patil
Jin-Hong Du
29
5
0
29 May 2023
Optimization's Neglected Normative Commitments
Benjamin Laufer
T. Gilbert
Helen Nissenbaum
OffRL
21
4
0
27 May 2023
Double Descent of Discrepancy: A Task-, Data-, and Model-Agnostic Phenomenon
Yi-Xiao Luo
Bin Dong
26
0
0
25 May 2023
When are ensembles really effective?
Ryan Theisen
Hyunsuk Kim
Yaoqing Yang
Liam Hodgkinson
Michael W. Mahoney
FedML
UQCV
35
15
0
21 May 2023
Towards understanding neural collapse in supervised contrastive learning with the information bottleneck method
Siwei Wang
S. Palmer
24
2
0
19 May 2023
Target-Side Augmentation for Document-Level Machine Translation
Guangsheng Bao
Zhiyang Teng
Yue Zhang
26
10
0
08 May 2023
Is deep learning a useful tool for the pure mathematician?
G. Williamson
FedML
21
13
0
25 Apr 2023
Learning Trajectories are Generalization Indicators
Jingwen Fu
Zhizheng Zhang
Dacheng Yin
Yan Lu
Nanning Zheng
AI4CE
28
3
0
25 Apr 2023
Prediction-Oriented Bayesian Active Learning
Freddie Bickford-Smith
Andreas Kirsch
Sebastian Farquhar
Y. Gal
Adam Foster
Tom Rainforth
29
29
0
17 Apr 2023
Analysis of Interpolating Regression Models and the Double Descent Phenomenon
T. McKelvey
4
0
0
17 Apr 2023
Mathematical Challenges in Deep Learning
V. Nia
Guojun Zhang
I. Kobyzev
Michael R. Metel
Xinlin Li
...
S. Hemati
M. Asgharian
Linglong Kong
Wulong Liu
Boxing Chen
AI4CE
VLM
37
1
0
24 Mar 2023
Online Learning for the Random Feature Model in the Student-Teacher Framework
Roman Worschech
B. Rosenow
41
0
0
24 Mar 2023
Lower bounds for the trade-off between bias and mean absolute deviation
A. Derumigny
Johannes Schmidt-Hieber
27
0
0
21 Mar 2023
Memorization Capacity of Neural Networks with Conditional Computation
Erdem Koyuncu
30
4
0
20 Mar 2023
Deep Learning Weight Pruning with RMT-SVD: Increasing Accuracy and Reducing Overfitting
Yitzchak Shmalo
Jonathan Jenkins
Oleksii Krupchytskyi
22
3
0
15 Mar 2023
Tradeoff of generalization error in unsupervised learning
Gilhan Kim
Ho-Jun Lee
Junghyo Jo
Yongjoo Baek
13
0
0
10 Mar 2023
DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu
Enzo Tartaglione
26
7
0
02 Mar 2023
Penalising the biases in norm regularisation enforces sparsity
Etienne Boursier
Nicolas Flammarion
34
14
0
02 Mar 2023
Approximately optimal domain adaptation with Fisher's Linear Discriminant
Hayden S. Helm
Ashwin De Silva
Joshua T. Vogelstein
Carey E. Priebe
Weiwei Yang
29
2
0
27 Feb 2023
Can we avoid Double Descent in Deep Neural Networks?
Victor Quétu
Enzo Tartaglione
AI4CE
20
3
0
26 Feb 2023
Precise Asymptotic Analysis of Deep Random Feature Models
David Bosch
Ashkan Panahi
B. Hassibi
35
19
0
13 Feb 2023
Better Diffusion Models Further Improve Adversarial Training
Zekai Wang
Tianyu Pang
Chao Du
Min-Bin Lin
Weiwei Liu
Shuicheng Yan
DiffM
24
208
0
09 Feb 2023
Pathologies of Predictive Diversity in Deep Ensembles
Taiga Abe
E. Kelly Buchanan
Geoff Pleiss
John P. Cunningham
UQCV
38
13
0
01 Feb 2023
Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression
Mo Zhou
Rong Ge
27
2
0
01 Feb 2023
MOSAIC, a comparison framework for machine learning models
Mattéo Papin
Yann Beaujeault-Taudiere
F. Magniette
VLM
16
0
0
30 Jan 2023
On the Lipschitz Constant of Deep Networks and Double Descent
Matteo Gamba
Hossein Azizpour
Mårten Björkman
25
7
0
28 Jan 2023
A Simple Algorithm For Scaling Up Kernel Methods
Tengyu Xu
Bryan T. Kelly
Semyon Malamud
11
0
0
26 Jan 2023
A Stability Analysis of Fine-Tuning a Pre-Trained Model
Z. Fu
Anthony Man-Cho So
Nigel Collier
23
3
0
24 Jan 2023
Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks
Fenglei Fan
Yingxin Li
Hanchuan Peng
T. Zeng
Fei-Yue Wang
22
5
0
23 Jan 2023
Strong inductive biases provably prevent harmless interpolation
Michael Aerni
Marco Milanta
Konstantin Donhauser
Fanny Yang
30
9
0
18 Jan 2023
WLD-Reg: A Data-dependent Within-layer Diversity Regularizer
Firas Laakom
Jenni Raitoharju
Alexandros Iosifidis
M. Gabbouj
AI4CE
26
7
0
03 Jan 2023
Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Yuxuan Du
Yibo Yang
Dacheng Tao
Min-hsiu Hsieh
36
22
0
29 Dec 2022
Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin
N. Macris
34
4
0
13 Dec 2022
Tight bounds for maximum $\ell_1$-margin classifiers
Stefan Stojanovic
Konstantin Donhauser
Fanny Yang
40
0
0
07 Dec 2022
High Dimensional Binary Classification under Label Shift: Phase Transition and Regularization
Jiahui Cheng
Minshuo Chen
Hao Liu
Tuo Zhao
Wenjing Liao
34
0
0
01 Dec 2022
Task Discovery: Finding the Tasks that Neural Networks Generalize on
Andrei Atanov
Andrei Filatov
Teresa Yeo
Ajay Sohmshetty
Amir Zamir
OOD
40
10
0
01 Dec 2022
Why Neural Networks Work
Sayan Mukherjee
Bernardo A. Huberman
11
2
0
26 Nov 2022
The Vanishing Decision Boundary Complexity and the Strong First Component
Hengshuai Yao
UQCV
30
0
0
25 Nov 2022
A Survey of Learning Curves with Bad Behavior: or How More Data Need Not Lead to Better Performance
Marco Loog
T. Viering
21
1
0
25 Nov 2022
Understanding the double descent curve in Machine Learning
Luis Sa-Couto
J. M. Ramos
Miguel Almeida
Andreas Wichert
27
1
0
18 Nov 2022
Emergence of Concepts in DNNs?
Tim Räz
19
0
0
11 Nov 2022
Do highly over-parameterized neural networks generalize since bad solutions are rare?
Julius Martinetz
T. Martinetz
22
1
0
07 Nov 2022