ResearchTrend.AI

Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee
arXiv:1905.11368 · 27 May 2019
Wei Hu, Zhiyuan Li, Dingli Yu

Papers citing "Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee"

11 citing papers:
Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning
Conference on Fairness, Accountability and Transparency (FAccT), 2022
A. Feder Cooper, Emanuel Moss, Benjamin Laufer, Helen Nissenbaum
10 Feb 2022
Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel
Machine-mediated learning (ML), 2021
Lei Tan, Shutong Wu, Xiaolin Huang
22 Mar 2021
Coresets for Robust Training of Neural Networks against Noisy Labels
Baharan Mirzasoleiman, Kaidi Cao, J. Leskovec
15 Nov 2020
Finite Versus Infinite Neural Networks: an Empirical Study
Neural Information Processing Systems (NeurIPS), 2020
Jaehoon Lee, S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Narain Sohl-Dickstein
31 Jul 2020
Knowledge Distillation Beyond Model Compression
F. Sarfraz, Elahe Arani, Bahram Zonooz
03 Jul 2020
CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information
Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, Lawrence Carin
22 Jun 2020
Learning Not to Learn in the Presence of Noisy Labels
Liu Ziyin, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
16 Feb 2020
Noise as a Resource for Learning in Knowledge Distillation
Elahe Arani, F. Sarfraz, Bahram Zonooz
11 Oct 2019
Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
International Conference on Learning Representations (ICLR), 2019
Yu Bai, Jason D. Lee
03 Oct 2019
Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network
Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang
02 Oct 2019
Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
27 Mar 2019