Early Stopping in Deep Networks: Double Descent and How to Eliminate it
Reinhard Heckel, Fatih Yilmaz
20 July 2020
arXiv: 2007.10099

Papers citing "Early Stopping in Deep Networks: Double Descent and How to Eliminate it"

13 papers shown

A dynamic view of the double descent
Vivek Shripad Borkar
03 May 2025

On the Relationship Between Double Descent of CNNs and Shape/Texture Bias Under Learning Process
Shun Iwase, Shuya Takahashi, Nakamasa Inoue, Rio Yokota, Ryo Nakamura, Hirokatsu Kataoka
04 Mar 2025

Gibbs-Based Information Criteria and the Over-Parameterized Regime
Haobo Chen, Yuheng Bu, Greg Wornell
08 Jun 2023

Double Descent of Discrepancy: A Task-, Data-, and Model-Agnostic Phenomenon
Yi-Xiao Luo, Bin Dong
25 May 2023

Unifying Grokking and Double Descent
Peter W. Battaglia, David Raposo, Kelsey
10 Mar 2023

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022

Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
Ethan Pickering, T. Sapsis
27 Aug 2022

Image Augmentation for Satellite Images
Oluwadara Adedeji, Peter Owoade, Op Ajayi, Olayiwola F Arowolo
29 Jul 2022

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel
03 Jun 2022

Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)
Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yu Wan, Vaishaal Shankar, Achal Dave, Ludwig Schmidt
VLM, OOD
03 May 2022

Provable Continual Learning via Sketched Jacobian Approximations
Reinhard Heckel
CLL
09 Dec 2021

Multi-scale Feature Learning Dynamics: Insights for Double Descent
Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie
06 Dec 2021

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
02 Mar 2020