Task-guided Disentangled Tuning for Pretrained Language Models (arXiv:2203.11431)
22 March 2022
Jiali Zeng, Yu Jiang, Shuangzhi Wu, Yongjing Yin, Mu Li
DRL
Papers citing "Task-guided Disentangled Tuning for Pretrained Language Models" (6 papers)
Disentangled Representation Learning
Xin Eric Wang, Hong Chen, Siao Tang, Zihao Wu, Wenwu Zhu
DRL | 21 Nov 2022
Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
LRM | 13 Sep 2021
CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding
Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Jiawei Han, Weizhu Chen
16 Oct 2020
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu
AAML | 25 Sep 2019
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE | 25 Sep 2019
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM | 20 Apr 2018