Proving the Lottery Ticket Hypothesis: Pruning is All You Need

3 February 2020
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir

Papers citing "Proving the Lottery Ticket Hypothesis: Pruning is All You Need"

Showing 50 of 179 citing papers.

Convolutional and Residual Networks Provably Contain Lottery Tickets
R. Burkholz · UQCV, MLT · 04 May 2022

Most Activation Functions Can Win the Lottery Without Excessive Depth
R. Burkholz · MLT · 04 May 2022

MIME: Adapting a Single Neural Network for Multi-task Inference with Memory-efficient Dynamic Pruning
Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra, Priyadarshini Panda · 11 Apr 2022

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava · 06 Apr 2022

On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks
Hongru Yang, Zhangyang Wang · MLT · 27 Mar 2022

Playing Lottery Tickets in Style Transfer Models
Meihao Kong, Jing Huo, Wenbin Li, Jing Wu, Yu-Kun Lai, Yang Gao · 25 Mar 2022

Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs
Paul Wimmer, Jens Mehnert, A. P. Condurache · CVBM · 15 Mar 2022

Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing
Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Javier Tejedor · 11 Mar 2022

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe · 09 Mar 2022

Provable and Efficient Continual Representation Learning
Yingcong Li, Mingchen Li, M. Salman Asif, Samet Oymak · CLL · 03 Mar 2022

Extracting Effective Subnetworks with Gumbel-Softmax
Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois · 25 Feb 2022

The rise of the lottery heroes: why zero-shot pruning is hard
Enzo Tartaglione · 24 Feb 2022

Rare Gems: Finding Lottery Tickets at Initialization
Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos · 24 Feb 2022

Bit-wise Training of Neural Network Weights
Cristian Ivan · MQ · 19 Feb 2022

DataMUX: Data Multiplexing for Neural Networks
Vishvak Murahari, Carlos E. Jimenez, Runzhe Yang, Karthik Narasimhan · MoE · 18 Feb 2022

A Study of Designing Compact Audio-Visual Wake Word Spotting System Based on Iterative Fine-Tuning in Neural Network Pruning
Hengshun Zhou, Jun Du, Chao-Han Huck Yang, Shifu Xiong, Chin-Hui Lee · VLM · 17 Feb 2022

Evolving Neural Networks with Optimal Balance between Information Flow and Connections Cost
A. Khalili, A. Bouchachia · 12 Feb 2022

On The Energy Statistics of Feature Maps in Pruning of Neural Networks with Skip-Connections
Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh · 3DPC · 26 Jan 2022

Examining and Mitigating the Impact of Crossbar Non-idealities for Accurate Implementation of Sparse Deep Neural Networks
Abhiroop Bhattacharjee, Lakshya Bhatnagar, Priyadarshini Panda · 13 Jan 2022

Exploiting Hybrid Models of Tensor-Train Networks for Spoken Command Recognition
Jun Qi, Javier Tejedor · 11 Jan 2022

SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning
Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward · 07 Dec 2021

i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery
Cameron R. Wolfe, Anastasios Kyrillidis · 07 Dec 2021

Equal Bits: Enforcing Equally Distributed Binary Network Weights
Yun-qiang Li, S. Pintea, J. C. V. Gemert · MQ · 02 Dec 2021

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao-quan Song, Atri Rudra, Christopher Ré · 30 Nov 2021

Plant 'n' Seek: Can You Find the Winning Ticket?
Jonas Fischer, R. Burkholz · 22 Nov 2021

On the Existence of Universal Lottery Tickets
R. Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos · UQCV · 22 Nov 2021

Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach
Qisheng He, Weisong Shi, Ming Dong · 01 Nov 2021

RGP: Neural Network Pruning through Its Regular Graph Structure
Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan, Xiaoniu Yang · 28 Oct 2021

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin · AAML, OOD · 26 Oct 2021

Lottery Tickets with Nonzero Biases
Jonas Fischer, Advait Gadhikar, R. Burkholz · 21 Oct 2021

Finding Everything within Random Binary Networks
Kartik K. Sreenivasan, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos · 18 Oct 2021

S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
Shiyu Liu, Chong Min John Tan, Mehul Motani · CLL · 17 Oct 2021

Composable Sparse Fine-Tuning for Cross-Lingual Transfer
Alan Ansell, E. Ponti, Anna Korhonen, Ivan Vulić · CLL, MoE · 14 Oct 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong · UQCV, MLT · 12 Oct 2021

Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions
Yang Wu, Dingheng Wang, Xiaotong Lu, Fan Yang, Guoqi Li, W. Dong, Jianbo Shi · 30 Aug 2021

Membership Inference Attacks on Lottery Ticket Networks
Aadesh Bagmar, Shishira R. Maiya, Shruti Bidwalka, Amol Deshpande · MIACV · 07 Aug 2021

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity
Chang Gao, T. Delbruck, Shih-Chii Liu · 04 Aug 2021

How much pre-training is enough to discover a good subnetwork?
Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, J. Kim, Anastasios Kyrillidis · 31 Jul 2021

A Lottery Ticket Hypothesis Framework for Low-Complexity Device-Robust Neural Acoustic Scene Classification
Hao Yen, Chao-Han Huck Yang, Hu Hu, Sabato Marco Siniscalchi, Qing Wang, ..., Yuanjun Zhao, Yuzhong Wu, Yannan Wang, Jun Du, Chin-Hui Lee · 03 Jul 2021

Pruning Randomly Initialized Neural Networks with Iterative Randomization
Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue · 17 Jun 2021

A Random CNN Sees Objects: One Inductive Bias of CNN and Its Applications
Yun Cao, Jianxin Wu · SSL · 17 Jun 2021

PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition
Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David D. Cox, James R. Glass · VLM · 10 Jun 2021

GANs Can Play Lottery Tickets Too
Xuxi Chen, Zhenyu (Allen) Zhang, Yongduo Sui, Tianlong Chen · GAN · 31 May 2021

A Probabilistic Approach to Neural Network Pruning
Xin-Yao Qian, Diego Klabjan · 20 May 2021

Model Pruning Based on Quantified Similarity of Feature Maps
Zidu Wang, Xue-jun Liu, Long Huang, Yuxiang Chen, Yufei Zhang, Zhikang Lin, Rui Wang · 13 May 2021

Playing Lottery Tickets with Vision and Language
Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu · VLM · 23 Apr 2021

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
James Diffenderfer, B. Kailkhura · MQ · 17 Mar 2021

Recent Advances on Neural Network Pruning at Initialization
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu · CVBM · 11 Mar 2021

MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks
Alexandre Ramé, Rémy Sun, Matthieu Cord · UQCV · 10 Mar 2021

Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang · 19 Feb 2021