arXiv: 2310.15724
Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules
24 October 2023
Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
Papers citing "Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules" (4 papers shown):
The Power of Scale for Parameter-Efficient Prompt Tuning. Brian Lester, Rami Al-Rfou, Noah Constant. 18 Apr 2021.
The Lottery Ticket Hypothesis for Pre-trained BERT Networks. Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin. 23 Jul 2020.
Pre-trained Models for Natural Language Processing: A Survey. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang. 18 Mar 2020.
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.