ResearchTrend.AI

A Model of Double Descent for High-dimensional Binary Linear Classification
arXiv:1911.05822 · 13 November 2019
Zeyu Deng, A. Kammoun, Christos Thrampoulidis

Papers citing "A Model of Double Descent for High-dimensional Binary Linear Classification"

20 / 20 papers shown
• The Double Descent Behavior in Two Layer Neural Network for Binary Classification
  Chathurika S Abeykoon, A. Beknazaryan, Hailin Sang (27 Apr 2025)

• Gibbs-Based Information Criteria and the Over-Parameterized Regime
  Haobo Chen, Yuheng Bu, Greg Wornell (08 Jun 2023)

• Unraveling Projection Heads in Contrastive Learning: Insights from Expansion and Shrinkage
  Yu Gui, Cong Ma, Yiqiao Zhong (06 Jun 2023)

• Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
  Antoine Bodin, N. Macris (13 Dec 2022)

• Tight bounds for maximum $\ell_1$-margin classifiers
  Stefan Stojanovic, Konstantin Donhauser, Fanny Yang (07 Dec 2022)

• A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
  Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, Nathan Srebro (21 Oct 2022)

• Deep Linear Networks can Benignly Overfit when Shallow Ones Do
  Niladri S. Chatterji, Philip M. Long (19 Sep 2022)

• Differentially Private Regression with Unbounded Covariates
  Jason Milionis, Alkis Kalavasis, Dimitris Fotakis, Stratis Ioannidis (19 Feb 2022)

• Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
  A. Bodin, N. Macris (22 Oct 2021)

• A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
  Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk (06 Sep 2021)

• Interpolation can hurt robust generalization even when there is no noise
  Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang (05 Aug 2021)

• Double Descent and Other Interpolation Phenomena in GANs
  Lorenzo Luzi, Yehuda Dar, Richard Baraniuk (07 Jun 2021)

• Towards an Understanding of Benign Overfitting in Neural Networks
  Zhu Li, Zhi-Hua Zhou, A. Gretton (06 Jun 2021)

• AdaBoost and robust one-bit compressed sensing
  Geoffrey Chinot, Felix Kuchelmeister, Matthias Löffler, Sara van de Geer (05 May 2021)

• Label-Imbalanced and Group-Sensitive Classification under Overparameterization
  Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis (02 Mar 2021)

• Precise Statistical Analysis of Classification Accuracies for Adversarial Training
  Adel Javanmard, Mahdi Soltanolkotabi (21 Oct 2020)

• When Does Preconditioning Help or Hurt Generalization?
  S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu (18 Jun 2020)

• Classification vs regression in overparameterized regimes: Does the loss function matter?
  Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai (16 May 2020)

• Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
  Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)

• A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
  Tengyuan Liang, Pragya Sur (05 Feb 2020)