RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training
arXiv:2312.04032 · 7 December 2023
Jaehyung Kim, Yuning Mao, Rui Hou, Hanchao Yu, Davis Liang, Pascale Fung, Qifan Wang, Fuli Feng, Lifu Huang, Madian Khabsa
Tags: AAML
Papers citing "RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training" (7 papers):
Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection. Rheeya Uppaal, Junjie Hu, Yixuan Li. 22 May 2023. [OODD]
Sharpness-Aware Minimization Improves Language Model Generalization. Dara Bahri, Hossein Mobahi, Yi Tay. 16 Oct 2021.
Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning. Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang. 13 Sep 2021. [LRM]
DynaSent: A Dynamic Benchmark for Sentiment Analysis. Christopher Potts, Zhengxuan Wu, Atticus Geiger, Douwe Kiela. 30 Dec 2020.
Calibration of Pre-trained Transformers. Shrey Desai, Greg Durrett. 17 Mar 2020. [UQLM]
FreeLB: Enhanced Adversarial Training for Natural Language Understanding. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu. 25 Sep 2019. [AAML]
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018. [ELM]