NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better

arXiv:2202.12024 · 24 February 2022
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

Papers citing "NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better"

33 / 33 papers shown
Noisy Deep Ensemble: Accelerating Deep Ensemble Learning via Noise Injection
Shunsuke Sakai, Shunsuke Tsuge, Tatsuhito Hasegawa
19 · 0 · 0 · 08 Apr 2025

ProtoBERT-LoRA: Parameter-Efficient Prototypical Finetuning for Immunotherapy Study Identification
Shijia Zhang, Xiyu Ding, Kai Ding, Jacob Zhang, Kevin Galinsky, Mengrui Wang, Ryan P. Mayers, Zheyu Wang, Hadi Kharrazi
68 · 0 · 0 · 26 Mar 2025

HaLoRA: Hardware-aware Low-Rank Adaptation for Large Language Models Based on Hybrid Compute-in-Memory Architecture
Taiqiang Wu, Chenchen Ding, Wenyong Zhou, Yuxin Cheng, Xincheng Feng, Shuqi Wang, Chufan Shi, Z. Liu, Ngai Wong
50 · 0 · 0 · 27 Feb 2025

VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval
Dhiman Paul, Md Rizwan Parvez, Nabeel Mohammed, Shafin Rahman
VGen · 67 · 0 · 0 · 02 Dec 2024

Exploring Accuracy-Fairness Trade-off in Large Language Models
Qingquan Zhang, Qiqi Duan, Bo Yuan, Yuhui Shi, J. Liu
67 · 0 · 0 · 21 Nov 2024

BiSSL: A Bilevel Optimization Framework for Enhancing the Alignment Between Self-Supervised Pre-Training and Downstream Fine-Tuning
Gustav Wagner Zakarias, Lars Kai Hansen, Z. Tan
27 · 0 · 0 · 03 Oct 2024

Audio-Guided Fusion Techniques for Multimodal Emotion Analysis
Pujin Shi, Fei Gao
22 · 1 · 0 · 08 Sep 2024

Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models
Xu Han, Linghao Jin, Xuezhe Ma, Xiaofeng Liu
AAML · 31 · 3 · 0 · 02 Jul 2024

Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning
Siwei Li, Yifan Yang, Yifei Shen, Fangyun Wei, Zongqing Lu, L. Qiu, Yuqing Yang
AI4CE · 38 · 1 · 0 · 01 Jul 2024

Can Small Language Models Learn, Unlearn, and Retain Noise Patterns?
Nicy Scaria, Silvester John Joseph Kennedy, Deepak N. Subramani
MU · 19 · 2 · 0 · 01 Jul 2024

Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation
Branislav Pecher, Ján Cegin, Róbert Belanec, Jakub Simko, Ivan Srba, M. Bieliková
37 · 1 · 0 · 18 Jun 2024

Slight Corruption in Pre-training Data Makes Better Diffusion Models
Hao Chen, Yujin Han, Diganta Misra, Xiang Li, Kai Hu, Difan Zou, Masashi Sugiyama, Jindong Wang, Bhiksha Raj
DiffM · 45 · 5 · 0 · 30 May 2024

Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models
Kang He, Yinghan Long, Kaushik Roy
21 · 2 · 0 · 15 Feb 2024

NoisyICL: A Little Noise in Model Parameters Calibrates In-context Learning
Yufeng Zhao, Yoshihiro Sakai, Naoya Inoue
31 · 3 · 0 · 08 Feb 2024

See the Unseen: Better Context-Consistent Knowledge-Editing by Noises
Youcheng Huang, Wenqiang Lei, Zheng-Wei Zhang, Jiancheng Lv, Shuicheng Yan
KELM · 14 · 6 · 0 · 15 Jan 2024

Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models
Ibtihel Amara, Vinija Jain, Aman Chadha
28 · 0 · 0 · 12 Dec 2023

Controlled Randomness Improves the Performance of Transformer Models
Tobias Deußer, Cong Zhao, Wolfgang Krämer, David Leonhard, Christian Bauckhage, R. Sifa
19 · 1 · 0 · 20 Oct 2023

Unlocking Emergent Modularity in Large Language Models
Zihan Qiu, Zeyu Huang, Jie Fu
20 · 8 · 0 · 17 Oct 2023

Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Berfin Şimşek, Masashi Sugiyama, Bhiksha Raj
24 · 26 · 0 · 29 Sep 2023

Improving Video Colorization by Test-Time Tuning
Yaping Zhao, Haitian Zheng, Jiebo Luo, E. Lam
6 · 6 · 0 · 25 Jun 2023

Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference
Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Hu Feng, Xichen Shang, Haibin Chen
AAML, KELM, CML, CLL · 79 · 15 · 0 · 19 Jun 2023

Incorporating Distributions of Discourse Structure for Long Document Abstractive Summarization
Dongqi Pu, Yifa Wang, Vera Demberg
29 · 21 · 0 · 26 May 2023

Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
Shoujie Tong, Heming Xia, Damai Dai, Runxin Xu, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui
12 · 0 · 0 · 24 May 2023

Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast
Yiduo Guo, Yaobo Liang, Dongyan Zhao, Bin Liu, Du Nan
CLL · 12 · 1 · 0 · 19 May 2023

HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation
Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang
21 · 15 · 0 · 17 Dec 2022

Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes
Yiqiao Jin, Xiting Wang, Y. Hao, Yizhou Sun, Xing Xie
28 · 11 · 0 · 24 Nov 2022

Exploring Mode Connectivity for Pre-trained Language Models
Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou
27 · 20 · 0 · 25 Oct 2022

PATS: Sensitivity-aware Noisy Learning for Pretrained Language Models
Yupeng Zhang, Hongzhi Zhang, Sirui Wang, Wei Yu Wu, Zhoujun Li
AAML · 12 · 1 · 0 · 22 Oct 2022

Improving Fine-tuning of Self-supervised Models with Contrastive Initialization
Haolin Pan, Yong Guo, Qinyi Deng, Hao-Fan Yang, Yiqun Chen, Jian Chen
SSL · 13 · 19 · 0 · 30 Jul 2022

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
LRM · 136 · 178 · 0 · 13 Sep 2021

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM · 241 · 1,450 · 0 · 18 Mar 2020

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE · 235 · 205 · 0 · 25 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 294 · 6,943 · 0 · 20 Apr 2018
