GMP*: Well-Tuned Gradual Magnitude Pruning Can Outperform Most BERT-Pruning Methods
Eldar Kurtic, Dan Alistarh
arXiv: 2210.06384 (12 October 2022)
Community: AI4MH
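For context on the technique named in the title: gradual magnitude pruning (GMP) interleaves training with pruning steps that zero the smallest-magnitude weights, raising the sparsity target over time, commonly along the cubic schedule of Zhu & Gupta (2017). Below is a minimal PyTorch sketch of that schedule and pruning step; the function names and defaults are illustrative, not taken from the paper.

```python
import torch

def cubic_sparsity(step: int, start_step: int, end_step: int,
                   s_init: float = 0.0, s_final: float = 0.9) -> float:
    """Cubic sparsity ramp (Zhu & Gupta, 2017): s_init -> s_final over [start_step, end_step]."""
    if step <= start_step:
        return s_init
    if step >= end_step:
        return s_final
    frac = (step - start_step) / (end_step - start_step)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

@torch.no_grad()
def magnitude_prune_(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the smallest-magnitude fraction of `weight` in place; return the binary mask."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    # k-th smallest absolute value serves as the pruning threshold
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    weight.mul_(mask)
    return mask
```

Between pruning steps the model trains as usual, with the current mask re-applied after each optimizer step so pruned weights stay zero, while the sparsity target advances along the schedule. The paper's GMP* refines this baseline mainly through carefully tuned hyperparameters (notably the learning-rate schedule) and knowledge distillation.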
Papers citing "GMP*: Well-Tuned Gradual Magnitude Pruning Can Outperform Most BERT-Pruning Methods" (4 papers):
Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression
Zhichao Xu, Ashim Gupta, Tao Li, Oliver Bentham, Vivek Srikumar
06 Jul 2024

oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
Daniel Fernando Campos, Alexandre Marques, Mark Kurtz, Chengxiang Zhai
Tags: VLM, AAML
30 Mar 2023

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
Tags: MQ
31 Jan 2021

The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
23 Jul 2020