arXiv:2210.04284 · Cited By
SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters
Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao
9 October 2022 · MoE
Papers citing "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters" (13 of 63 papers shown)

Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking
Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, Dacheng Tao, Li Guo
MoMe, MoE · 01 Jun 2023

Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, ..., Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
ALM · 30 May 2023

One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng, Peiyuan Zhang, Wei Lu
28 May 2023

LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-wei Lee
ALM · 04 Apr 2023

Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
28 Mar 2023

OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System
Chao Xue, Wen Liu, Shunxing Xie, Zhenfang Wang, Jiaxing Li, ..., Shi-Yong Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao
01 Mar 2023

Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE
Qihuang Zhong, Liang Ding, Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, Dacheng Tao
VLM · 18 Feb 2023

Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, ..., Yixin Chen, Xinbo Gao, Steven C. H. Hoi, Xiaoou Tang, Dacheng Tao
VLM, ELM · 04 Dec 2022

Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao
28 May 2022

SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution
Shwai He, Chenbo Jiang, Daize Dong, Liang Ding
05 Apr 2022

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE · 25 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018