From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
arXiv:2112.07198 · 14 December 2021
Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
Tags: VLM
Papers citing "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" (6 papers shown)
Title | Authors | Tags | Citations | Date
Adaptive Hypergraph Network for Trust Prediction | Rongwei Xu, Guanfeng Liu, Yan Wang, Xuyun Zhang, Kai Zheng, Xiaofang Zhou | - | 0 | 07 Feb 2024
COST-EFF: Collaborative Optimization of Spatial and Temporal Efficiency with Slenderized Multi-exit Language Models | Bowen Shen, Zheng Lin, Yuanxin Liu, Zhengxiao Liu, Lei Wang, Weiping Wang | VLM | 4 | 27 Oct 2022
Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling | Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, Andrew McCallum | - | 14 | 10 Oct 2022
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models | Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang | MoE | 205 | 25 Sep 2019
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT | Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer | MQ | 574 | 12 Sep 2019
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | ELM | 6,950 | 20 Apr 2018