arXiv: 2108.00259
How much pre-training is enough to discover a good subnetwork?
31 July 2021
Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, J. Kim, Anastasios Kyrillidis
Papers citing "How much pre-training is enough to discover a good subnetwork?" (9 papers)

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
Max Zimmer, Megi Andoni, Christoph Spiegel, S. Pokutta
23 Dec 2023

Strong Lottery Ticket Hypothesis with ε-perturbation
Zheyang Xiong, Fangshuo Liao, Anastasios Kyrillidis
29 Oct 2022

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
24 Jan 2021

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021

The Lottery Ticket Hypothesis for Object Recognition
Sharath Girish, Shishira R. Maiya, Kamal Gupta, Hao Chen, L. Davis, Abhinav Shrivastava
08 Dec 2020

The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
23 Jul 2020

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems
Hoi-To Wai, Jean Lafond, Anna Scaglione, Eric Moulines
05 Dec 2016