Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
arXiv:2309.17002 · 29 September 2023
Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Berfin Şimşek, Masashi Sugiyama, Bhiksha Raj

Papers citing "Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks"
30 / 30 papers shown

Model Hemorrhage and the Robustness Limits of Large Language Models
Ziyang Ma, Z. Li, L. Zhang, Gui-Song Xia, Bo Du, Liangpei Zhang, Dacheng Tao
50 · 0 · 0 · 31 Mar 2025

A Language Anchor-Guided Method for Robust Noisy Domain Generalization
Zilin Dai, Lehong Wang, Fangzhou Lin, Yidong Wang, Zhigang Li, Kazunori D Yamada, Ziming Zhang, Wang Lu
51 · 0 · 0 · 21 Mar 2025

BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models
Yupeng Chang, Yi-Ju Chang, Yuan Wu
AI4CE, ALM
74 · 0 · 0 · 24 Feb 2025

Do we really have to filter out random noise in pre-training data for language models?
Jinghan Ru, Yuxin Xie, Xianwei Zhuang, Yuguo Yin, Yuexian Zou
83 · 2 · 0 · 10 Feb 2025

Bayesian-guided Label Mapping for Visual Reprogramming
C. Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu
29 · 2 · 0 · 31 Oct 2024

Why pre-training is beneficial for downstream classification tasks?
Xin Jiang, Xu Cheng, Zechao Li
24 · 0 · 0 · 11 Oct 2024

UniTalker: Scaling up Audio-Driven 3D Facial Animation through A Unified Model
Xiangyu Fan, Jiaqi Li, Zhiqian Lin, Weiye Xiao, Lei Yang
CVBM, VGen
26 · 3 · 0 · 01 Aug 2024

LEMoN: Label Error Detection using Multimodal Neighbors
Haoran Zhang, Aparna Balagopalan, Nassim Oufattole, Hyewon Jeong, Yan Wu, Jiacheng Zhu, Marzyeh Ghassemi
42 · 0 · 0 · 10 Jul 2024

Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models
Xu Han, Linghao Jin, Xuezhe Ma, Xiaofeng Liu
AAML
23 · 3 · 0 · 02 Jul 2024

Slight Corruption in Pre-training Data Makes Better Diffusion Models
Hao Chen, Yujin Han, Diganta Misra, Xiang Li, Kai Hu, Difan Zou, Masashi Sugiyama, Jindong Wang, Bhiksha Raj
DiffM
43 · 5 · 0 · 30 May 2024

Can We Treat Noisy Labels as Accurate?
Yuxiang Zheng, Zhongyi Han, Yilong Yin, Xin Gao, Tongliang Liu
25 · 1 · 0 · 21 May 2024

On Catastrophic Inheritance of Large Foundation Models
Hao Chen, Bhiksha Raj, Xing Xie, Jindong Wang
AI4CE
48 · 12 · 0 · 02 Feb 2024

The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness
Mengyao Du, Miao Zhang, Yuwen Pu, Kai Xu, Shouling Ji, Quanjun Yin
8 · 1 · 0 · 25 Jan 2024

Unmasking and Improving Data Credibility: A Study with Datasets for Training Harmless Language Models
Zhaowei Zhu, Jialu Wang, Hao Cheng, Yang Liu
11 · 14 · 0 · 19 Nov 2023

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
Changdae Oh, Hyeji Hwang, Hee-young Lee, Yongtaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song
VLM, VPVLM
78 · 54 · 0 · 26 Mar 2023

FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, ..., Marios Savvides, T. Shinozaki, Bhiksha Raj, Bernt Schiele, Xing Xie
175 · 251 · 0 · 15 May 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM
258 · 7,337 · 0 · 11 Nov 2021

FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, T. Shinozaki
AAML
213 · 848 · 0 · 15 Oct 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM
236 · 780 · 0 · 14 Oct 2021

ResNet strikes back: An improved training procedure in timm
Ross Wightman, Hugo Touvron, Hervé Jégou
AI4TS
198 · 477 · 0 · 01 Oct 2021

Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
SyDa
237 · 588 · 0 · 14 Jul 2021

ImageNet-21K Pretraining for the Masses
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
SSeg, VLM, CLIP
154 · 676 · 0 · 22 Apr 2021

Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts
Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut
VLM
273 · 1,077 · 0 · 17 Feb 2021

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
VLM, CLIP
293 · 3,683 · 0 · 11 Feb 2021

Provably End-to-end Label-Noise Learning without Anchor Points
Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama
NoLa
112 · 119 · 0 · 04 Feb 2021

Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels
Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun
384 · 139 · 0 · 13 Jan 2021

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
223 · 4,424 · 0 · 23 Jan 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
294 · 6,927 · 0 · 20 Apr 2018

Xception: Deep Learning with Depthwise Separable Convolutions
François Chollet
MDE, BDL, PINN
201 · 14,190 · 0 · 07 Oct 2016

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
VLM, ObjD
279 · 39,083 · 0 · 01 Sep 2014