Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?
6 October 2022
Mansheej Paul, F. Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, Gintare Karolina Dziugaite
UQCV
Papers citing "Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?" (30 papers shown)
Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry
Mohammed Adnan, Rohan Jain, Ekansh Sharma, Rahul Krishnan, Yani Andrew Ioannou
08 May 2025

Model Connectomes: A Generational Approach to Data-Efficient Language Models
Klemen Kotar, Greta Tuckute
29 Apr 2025

Sign-In to the Lottery: Reparameterizing Sparse Training From Scratch
Advait Gadhikar, Tom Jacobs, Chao Zhou, R. Burkholz
17 Apr 2025

Approaching Deep Learning through the Spectral Dynamics of Weights
David Yunis, Kumar Kshitij Patel, Samuel Wheeler, Pedro H. P. Savarese, Gal Vardi, Karen Livescu, Michael Maire, Matthew R. Walter
21 Aug 2024

Mask in the Mirror: Implicit Sparsification
Tom Jacobs, R. Burkholz
19 Aug 2024

Pruning Large Language Models with Semi-Structural Adaptive Sparse Training
Weiyu Huang, Yuezhou Hu, Guohao Jian, Jun Zhu, Jianfei Chen
30 Jul 2024

Evaluating Zero-Shot Long-Context LLM Compression
Chenyu Wang, Yihan Wang, Kai Li
10 Jun 2024

Optimal Eye Surgeon: Finding Image Priors through Sparse Generators at Initialization
Avrajit Ghosh, Xitong Zhang, Kenneth K. Sun, Qing Qu, S. Ravishankar, Rongrong Wang
MedIm
07 Jun 2024

Cyclic Sparse Training: Is it Enough?
Advait Gadhikar, Sree Harsha Nelaturu, R. Burkholz
CLL
04 Jun 2024

Simultaneous linear connectivity of neural networks modulo permutation
Ekansh Sharma, Devin Kwok, Tom Denton, Daniel M. Roy, David Rolnick, Gintare Karolina Dziugaite
09 Apr 2024

Insights into the Lottery Ticket Hypothesis and Iterative Magnitude Pruning
Tausifa Jan Saleem, Ramanjit Ahuja, Surendra Prasad, Brejesh Lall
22 Mar 2024

A Survey of Lottery Ticket Hypothesis
Bohan Liu, Zijie Zhang, Peixiong He, Zhensen Wang, Yang Xiao, Ruimeng Ye, Yang Zhou, Wei-Shinn Ku, Bo Hui
UQCV
07 Mar 2024

Masks, Signs, And Learning Rate Rewinding
Advait Gadhikar, R. Burkholz
29 Feb 2024

No Free Prune: Information-Theoretic Barriers to Pruning at Initialization
Tanishq Kumar, Kevin Luo, Mark Sellke
02 Feb 2024

Faster and Accurate Neural Networks with Semantic Inference
Sazzad Sayyed, Jonathan D. Ashdown, Francesco Restuccia
02 Oct 2023

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang
BDL
28 Aug 2023

Iterative Magnitude Pruning as a Renormalisation Group: A Study in The Context of The Lottery Ticket Hypothesis
Abu-Al Hassan
06 Aug 2023

Distilled Pruning: Using Synthetic Data to Win the Lottery
Luke McDermott, Daniel Cummings
SyDa, DD
07 Jul 2023

Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
Max Zimmer, Christoph Spiegel, S. Pokutta
MoMe
29 Jun 2023

Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression
Allan Raventós, Mansheej Paul, F. Chen, Surya Ganguli
26 Jun 2023

Quantifying lottery tickets under label noise: accuracy, calibration, and complexity
V. Arora, Daniele Irto, Sebastian Goldt, G. Sanguinetti
21 Jun 2023

A Simple and Effective Pruning Approach for Large Language Models
Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter
20 Jun 2023

How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint
Qiaozhe Zhang, Rui-qi Zhang, Jun Sun, Yingzhuang Liu
09 Jun 2023

Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
Moonseok Choi, Hyungi Lee, G. Nam, Juho Lee
24 May 2023

Break It Down: Evidence for Structural Compositionality in Neural Networks
Michael A. Lepori, Thomas Serre, Ellie Pavlick
26 Jan 2023

CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models
Denis Kuznedelev, Eldar Kurtic, Elias Frantar, Dan Alistarh
VLM, ViT
14 Oct 2022

Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
Clara Na, Sanket Vaibhav Mehta, Emma Strubell
25 May 2022

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020