Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks

27 March 2019
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
NoLa

Papers citing "Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"

44 papers
Paint Outside the Box: Synthesizing and Selecting Training Data for Visual Grounding
Zilin Du, Haoxin Li, Jianfei Yu, Boyang Li
01 Dec 2024

Sharper Guarantees for Learning Neural Network Classifiers with Gradient Methods
Hossein Taheri, Christos Thrampoulidis, Arya Mazumdar
MLT
13 Oct 2024

Training on Synthetic Data Beats Real Data in Multimodal Relation Extraction
Zilin Du, Haoxin Li, Xu Guo, Boyang Li
05 Dec 2023

TouchUp-G: Improving Feature Representation through Graph-Centric Finetuning
Jing Zhu, Xiang Song, V. Ioannidis, Danai Koutra, Christos Faloutsos
25 Sep 2023

Connecting NTK and NNGP: A Unified Theoretical Framework for Wide Neural Network Learning Dynamics
Yehonatan Avidan, Qianyi Li, H. Sompolinsky
08 Sep 2023

Double Descent of Discrepancy: A Task-, Data-, and Model-Agnostic Phenomenon
Yi-Xiao Luo, Bin Dong
25 May 2023

Learning with Noisy Labels through Learnable Weighting and Centroid Similarity
F. Wani, Maria Sofia Bucarelli, Fabrizio Silvestri
NoLa
16 Mar 2023

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar
15 Nov 2022

Instance-Dependent Generalization Bounds via Optimal Transport
Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss
02 Nov 2022

Automatic Data Augmentation via Invariance-Constrained Learning
Ignacio Hounie, Luiz F. O. Chamon, Alejandro Ribeiro
29 Sep 2022

Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David M. Krueger, Sara Hooker
20 Sep 2022

On the Activation Function Dependence of the Spectral Bias of Neural Networks
Q. Hong, Jonathan W. Siegel, Qinyan Tan, Jinchao Xu
09 Aug 2022

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin
17 Jun 2022

Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile
Dong Chen, Lingfei Wu, Siliang Tang, Xiao Yun, Bo Long, Yueting Zhuang
VLM, NoLa
04 Jun 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
MLT
15 Feb 2022

Maximum Likelihood Uncertainty Estimation: Robustness to Outliers
Deebul Nair, Nico Hochgeschwender, Miguel A. Olivares-Mendez
OOD
03 Feb 2022

Do We Need to Penalize Variance of Losses for Learning with Label Noise?
Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Mingming Gong, Tongliang Liu
NoLa
30 Jan 2022

A Stochastic Bundle Method for Interpolating Networks
Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. P. Kumar
29 Jan 2022

Overview frequency principle/spectral bias in deep learning
Z. Xu, Yaoyu Zhang, Tao Luo
FaML
19 Jan 2022

In Defense of the Unitary Scalarization for Deep Multi-Task Learning
Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. P. Kumar
11 Jan 2022

Rethinking Influence Functions of Neural Networks in the Over-parameterized Regime
Rui Zhang, Shihua Zhang
TDI
15 Dec 2021

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis
05 Dec 2021

Constrained Instance and Class Reweighting for Robust Learning under Label Noise
Abhishek Kumar, Ehsan Amid
NoLa
09 Nov 2021

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher
02 Nov 2021

Mitigating Memorization of Noisy Labels via Regularization between Representations
Hao Cheng, Zhaowei Zhu, Xing Sun, Yang Liu
NoLa
18 Oct 2021

Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher
Mehdi Rezagholizadeh, A. Jafari, Puneeth Salad, Pranav Sharma, Ali Saheb Pasand, A. Ghodsi
16 Oct 2021

Robustness and Reliability When Training With Noisy Labels
Amanda Olmin, Fredrik Lindsten
OOD, NoLa
07 Oct 2021

Learning with Noisy Labels via Sparse Regularization
Xiong Zhou, Xianming Liu, Chenyang Wang, Deming Zhai, Junjun Jiang, Xiangyang Ji
NoLa
31 Jul 2021

A Theoretical Analysis of Fine-tuning with Linear Teachers
Gal Shachaf, Alon Brutzkus, Amir Globerson
04 Jul 2021

Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion
Xian-Jin Gui, Wei Wang, Zhang-Hao Tian
NoLa
17 Jun 2021

RATT: Leveraging Unlabeled Data to Guarantee Generalization
Saurabh Garg, Sivaraman Balakrishnan, J. Zico Kolter, Zachary Chase Lipton
01 May 2021

Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi
AI4CE, OOD
29 Apr 2021

Provable Super-Convergence with a Large Cyclical Learning Rate
Samet Oymak
22 Feb 2021

Advances in Electron Microscopy with Deep Learning
Jeffrey M. Ede
04 Jan 2021

A Survey of Label-noise Representation Learning: Past, Present and Future
Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama
NoLa
09 Nov 2020

Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
17 Sep 2020

How benign is benign overfitting?
Amartya Sanyal, P. Dokania, Varun Kanade, Philip H. S. Torr
NoLa, AAML
08 Jul 2020

Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent
Mehdi Abbana Bennani, Thang Doan, Masashi Sugiyama
CLL
21 Jun 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020

Part-dependent Label Noise: Towards Instance-dependent Label Noise
Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, Dacheng Tao, Masashi Sugiyama
NoLa
14 Jun 2020

LOCA: LOcal Conformal Autoencoder for standardized data coordinates
Erez Peterfreund, Ofir Lindenbaum, Felix Dietrich, Tom S. Bertalan, M. Gavish, Ioannis G. Kevrekidis, Ronald R. Coifman
15 Apr 2020

Learning Not to Learn in the Presence of Noisy Labels
Liu Ziyin, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
NoLa
16 Feb 2020

Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Reinhard Heckel, Mahdi Soltanolkotabi
DiffM
31 Oct 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016