
SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning (arXiv:2012.00925)

2 December 2020
Zhuowei Wang
Jing Jiang
Bo Han
Lei Feng
Bo An
Gang Niu
Guodong Long
    NoLa

Papers citing "SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning"

7 / 7 papers shown
Is one annotation enough? A data-centric image classification benchmark for noisy and ambiguous label estimation
Lars Schmarje
Vasco Grossmann
Claudius Zelenka
S. Dippel
R. Kiko
...
M. Pastell
J. Stracke
A. Valros
N. Volkmann
Reinhard Koch
13 Jul 2022
FedNoiL: A Simple Two-Level Sampling Method for Federated Learning with Noisy Labels
Zhuowei Wang
Tianyi Zhou
Guodong Long
Bo Han
Jing Jiang
FedML
20 May 2022
FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
Bowen Zhang
Yidong Wang
Wenxin Hou
Hao Wu
Jindong Wang
Manabu Okumura
T. Shinozaki
AAML
15 Oct 2021
Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion
Xian-Jin Gui
Wei Wang
Zhang-Hao Tian
NoLa
17 Jun 2021
Combating noisy labels by agreement: A joint training method with co-regularization
Hongxin Wei
Lei Feng
Xiangyu Chen
Bo An
NoLa
05 Mar 2020
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn
Pieter Abbeel
Sergey Levine
OOD
09 Mar 2017
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen
Harri Valpola
OOD
MoMe
06 Mar 2017