Revisiting Few-sample BERT Fine-tuning (arXiv:2006.05987)
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi
10 June 2020

Papers citing "Revisiting Few-sample BERT Fine-tuning"

50 of 60 citing papers shown.
• Fine-Tuning without Performance Degradation · Han Wang, Adam White, Martha White · OnRL · 01 May 2025
• Memorization and Knowledge Injection in Gated LLMs · Xu Pan, Ely Hahami, Zechen Zhang, H. Sompolinsky · KELM, CLL, RALM · 30 Apr 2025
• Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models · Benjamin Laufer, Jon M. Kleinberg, Hoda Heidari · 03 Jan 2025
• Token-Efficient Leverage Learning in Large Language Models · Yuanhao Zeng, Min Wang, Yihang Wang, Yingxia Shao · 01 Apr 2024
• CLCE: An Approach to Refining Cross-Entropy and Contrastive Learning for Optimized Learning Fusion · Zijun Long, George Killick, Lipeng Zhuang, Gerardo Aragon Camarasa, Zaiqiao Meng, R. McCreadie · VLM · 22 Feb 2024
• NoisyICL: A Little Noise in Model Parameters Calibrates In-context Learning · Yufeng Zhao, Yoshihiro Sakai, Naoya Inoue · 08 Feb 2024
• Text-To-KG Alignment: Comparing Current Methods on Classification Tasks · Sondre Wold, Lilja Ovrelid, Erik Velldal · 05 Jun 2023
• Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models · Fengzhu Zeng, Wei Gao · 05 Jun 2023
• The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification · Anastasiia Grishina, Max Hort, Leon Moonen · 08 May 2023
• Mask-guided BERT for Few Shot Text Classification · Wenxiong Liao, Zheng Liu, Haixing Dai, Zihao Wu, Yiyang Zhang, ..., Dajiang Zhu, Tianming Liu, Sheng R. Li, Xiang Li, Hongmin Cai · VLM · 21 Feb 2023
• Measuring the Instability of Fine-Tuning · Yupei Du, D. Nguyen · 15 Feb 2023
• Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective · Jongwoo Ko, Seungjoon Park, Minchan Jeong, S. Hong, Euijai Ahn, Duhyeuk Chang, Se-Young Yun · 03 Feb 2023
• KL Regularized Normalization Framework for Low Resource Tasks · Neeraj Kumar, Ankur Narang, Brejesh Lall · 21 Dec 2022
• DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation · Yuxi Feng, Xiaoyuan Yi, Xiting Wang, L. Lakshmanan, Xing Xie · DiffM · 16 Dec 2022
• Revisiting Distance Metric Learning for Few-Shot Natural Language Classification · Witold Sosnowski, Anna Wróblewska, Karolina Seweryn, P. Gawrysiak · 28 Nov 2022
• Distance Metric Learning Loss Functions in Few-Shot Scenarios of Supervised Language Models Fine-Tuning · Witold Sosnowski, Karolina Seweryn, Anna Wróblewska, P. Gawrysiak · 28 Nov 2022
• Gradient Knowledge Distillation for Pre-trained Language Models · Lean Wang, Lei Li, Xu Sun · VLM · 02 Nov 2022
• AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning · Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao · MoE · 31 Oct 2022
• STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification · Jinta Weng, Yue Hu, Jing Qiu, Heyan Huan · VLM · 29 Oct 2022
• ROSE: Robust Selective Fine-tuning for Pre-trained Language Models · Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, R. Jiang · AAML · 18 Oct 2022
• Efficient Few-Shot Learning Without Prompts · Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg · VLM · 22 Sep 2022
• Combating high variance in Data-Scarce Implicit Hate Speech Classification · Debaditya Pal, Kaustubh Chaudhari, Harsh Sharma · 29 Aug 2022
• Mere Contrastive Learning for Cross-Domain Sentiment Analysis · Yun Luo, Fang Guo, Zihan Liu, Yue Zhang · 18 Aug 2022
• ELECTRA is a Zero-Shot Learner, Too · Shiwen Ni, Hung-Yu Kao · 17 Jul 2022
• Few-Shot Natural Language Inference Generation with PDD: Prompt and Dynamic Demonstration · Kaijian Li, Shansan Gong, Kenny Q. Zhu · 21 May 2022
• A Comprehensive Survey of Few-shot Learning: Evolution, Applications, Challenges, and Opportunities · Yisheng Song, Ting-Yuan Wang, S. Mondal, J. P. Sahoo · SLR · 13 May 2022
• Few-shot Mining of Naturally Occurring Inputs and Outputs · Mandar Joshi, Terra Blevins, M. Lewis, Daniel S. Weld, Luke Zettlemoyer · 09 May 2022
• Embedding Hallucination for Few-Shot Language Fine-tuning · Yiren Jian, Chongyang Gao, Soroush Vosoughi · 03 May 2022
• Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce Data Annotation Required in Visual Commonsense Tasks · Navid Rezaei, Marek Reformat · VLM · 25 Apr 2022
• Fusing finetuned models for better pretraining · Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz · FedML, AI4CE, MoMe · 06 Apr 2022
• A sequence-to-sequence approach for document-level relation extraction · John Giorgi, Gary D. Bader, Bo Wang · 03 Apr 2022
• Can pre-trained Transformers be used in detecting complex sensitive sentences? -- A Monsanto case study · Roelien C. Timmer, David Liebowitz, Surya Nepal, S. Kanhere · 14 Mar 2022
• HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks · Zhengkun Zhang, Wenya Guo, Xiaojun Meng, Yasheng Wang, Yadao Wang, Xin Jiang, Qun Liu, Zhenglu Yang · 08 Mar 2022
• NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better · Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie · 24 Feb 2022
• Revisiting Parameter-Efficient Tuning: Are We Really There Yet? · Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang · 16 Feb 2022
• Black-Box Tuning for Language-Model-as-a-Service · Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu · VLM · 10 Jan 2022
• How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? · Xinhsuai Dong, Anh Tuan Luu, Min-Bin Lin, Shuicheng Yan, Hanwang Zhang · SILM, AAML · 22 Dec 2021
• Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning · Euna Jung, Jaekeol Choi, Wonjong Rhee · 28 Oct 2021
• KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier · Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, Xuanjing Huang · 06 Oct 2021
• Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning · Seonghyeon Ye, Jiseon Kim, Alice H. Oh · CLL, VLM · 10 Sep 2021
• PPT: Pre-trained Prompt Tuning for Few-shot Learning · Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang · VLM · 09 Sep 2021
• Discrete and Soft Prompting for Multilingual Models · Mengjie Zhao, Hinrich Schütze · LRM · 08 Sep 2021
• NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction · Yi Sun, Yu Zheng, Chao Hao, Hangping Qiu · VLM · 08 Sep 2021
• How Does Adversarial Fine-Tuning Benefit BERT? · J. Ebrahimi, Hao Yang, Wei Zhang · AAML · 31 Aug 2021
• Rethinking Why Intermediate-Task Fine-Tuning Works · Ting-Yun Chang, Chi-Jen Lu · LRM · 26 Aug 2021
• FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark · Liang Xu, Xiaojing Lu, Chenyang Yuan, Xuanwei Zhang, Huilin Xu, ..., Guoao Wei, X. Pan, Xin Tian, Libo Qin, Hai Hu · ELM · 15 Jul 2021
• Layer-wise Analysis of a Self-supervised Speech Representation Model · Ankita Pasad, Ju-Chieh Chou, Karen Livescu · SSL · 10 Jul 2021
• The MultiBERTs: BERT Reproductions for Robustness Analysis · Thibault Sellam, Steve Yadlowsky, Jason W. Wei, Naomi Saphra, Alexander D'Amour, ..., Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, Ellie Pavlick · 30 Jun 2021
• A Closer Look at How Fine-tuning Changes BERT · Yichu Zhou, Vivek Srikumar · 27 Jun 2021
• Entailment as Few-Shot Learner · Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, Hao Ma · 29 Apr 2021