ResearchTrend.AI

arXiv:2002.00585
Proving the Lottery Ticket Hypothesis: Pruning is All You Need

3 February 2020
Eran Malach
Gilad Yehudai
Shai Shalev-Shwartz
Ohad Shamir

Papers citing "Proving the Lottery Ticket Hypothesis: Pruning is All You Need"

50 / 179 papers shown
Sparse Binary Transformers for Multivariate Time Series Modeling
Matt Gorbett
Hossein Shirazi
I. Ray
AI4TS
25
13
0
09 Aug 2023
Quantifying lottery tickets under label noise: accuracy, calibration, and complexity
V. Arora
Daniele Irto
Sebastian Goldt
G. Sanguinetti
34
2
0
21 Jun 2023
Representation and decomposition of functions in DAG-DNNs and structural network pruning
Wonjun Hwang
8
1
0
16 Jun 2023
Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD
Yijun Wan
Melih Barsbey
A. Zaidi
Umut Simsekli
22
1
0
13 Jun 2023
Biologically-Motivated Learning Model for Instructed Visual Processing
R. Abel
S. Ullman
20
0
0
04 Jun 2023
Generalization Bounds for Magnitude-Based Pruning via Sparse Matrix Sketching
E. Guha
Prasanjit Dubey
X. Huo
MLT
32
1
0
30 May 2023
Evolving Connectivity for Recurrent Spiking Neural Networks
Guan-Bo Wang
Yuhao Sun
Sijie Cheng
Sen Song
6
5
0
28 May 2023
Pruning at Initialization -- A Sketching Perspective
Noga Bar
Raja Giryes
16
1
0
27 May 2023
Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations
Elia Cunegatti
Matteo Farina
Doina Bucur
Giovanni Iacca
35
1
0
26 May 2023
Learning to Act through Evolution of Neural Diversity in Random Neural Networks
J. Pedersen
S. Risi
23
2
0
25 May 2023
Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking Neural Network
Man Yao
Yu-Liang Chou
Guangshe Zhao
Xiawu Zheng
Yonghong Tian
Boxing Xu
Guoqi Li
36
4
0
20 May 2023
Rethinking Graph Lottery Tickets: Graph Sparsity Matters
Bo Hui
Jocelyn M Mora
Adrian V. Dalca
I. Aganj
37
22
0
03 May 2023
Randomly Initialized Subnetworks with Iterative Weight Recycling
Matt Gorbett
L. D. Whitley
15
4
0
28 Mar 2023
ExplainFix: Explainable Spatially Fixed Deep Networks
Alex Gaudio
Christos Faloutsos
A. Smailagic
P. Costa
A. Campilho
FAtt
16
3
0
18 Mar 2023
DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu
Enzo Tartaglione
24
7
0
02 Mar 2023
Considering Layerwise Importance in the Lottery Ticket Hypothesis
Benjamin Vandersmissen
José Oramas
15
1
0
22 Feb 2023
Workload-Balanced Pruning for Sparse Spiking Neural Networks
Ruokai Yin
Youngeun Kim
Yuhang Li
Abhishek Moitra
Nitin Satpute
Anna Hambitzer
Priyadarshini Panda
23
18
0
13 Feb 2023
Quantum Neuron Selection: Finding High Performing Subnetworks With Quantum Algorithms
Tim Whitaker
17
1
0
12 Feb 2023
Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks
Shuai Zhang
M. Wang
Pin-Yu Chen
Sijia Liu
Songtao Lu
Miaoyuan Liu
MLT
19
16
0
06 Feb 2023
Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation
H. Yamasaki
Sathyawageeswar Subramanian
Satoshi Hayakawa
Sho Sonoda
MLT
30
4
0
27 Jan 2023
Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks
Haiyan Zhao
Tianyi Zhou
Guodong Long
Jing Jiang
Chengqi Zhang
27
0
0
27 Jan 2023
Pruning Before Training May Improve Generalization, Provably
Hongru Yang
Yingbin Liang
Xiaojie Guo
Lingfei Wu
Zhangyang Wang
MLT
19
1
0
01 Jan 2023
Publishing Efficient On-device Models Increases Adversarial Vulnerability
Sanghyun Hong
Nicholas Carlini
Alexey Kurakin
AAML
30
2
0
28 Dec 2022
AP: Selective Activation for De-sparsifying Pruned Neural Networks
Shiyu Liu
Rohan Ghosh
Dylan Tan
Mehul Motani
AAML
8
0
0
09 Dec 2022
Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks
Shiyu Liu
Rohan Ghosh
John Tan Chong Min
Mehul Motani
27
0
0
09 Dec 2022
LU decomposition and Toeplitz decomposition of a neural network
Yucong Liu
Simiao Jiao
Lek-Heng Lim
30
7
0
25 Nov 2022
Finding Skill Neurons in Pre-trained Transformer-based Language Models
Xiaozhi Wang
Kaiyue Wen
Zhengyan Zhang
Lei Hou
Zhiyuan Liu
Juanzi Li
MILM
MoE
19
50
0
14 Nov 2022
Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing
Yonggan Fu
Yang Zhang
Kaizhi Qian
Zhifan Ye
Zhongzhi Yu
Cheng-I Jeff Lai
Yingyan Lin
22
8
0
02 Nov 2022
Strong Lottery Ticket Hypothesis with $\varepsilon$--perturbation
Zheyang Xiong
Fangshuo Liao
Anastasios Kyrillidis
29
2
0
29 Oct 2022
LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang
Chen Dun
Fangshuo Liao
C. Jermaine
Anastasios Kyrillidis
18
3
0
28 Oct 2022
Approximating Continuous Convolutions for Deep Network Compression
Theo W. Costain
V. Prisacariu
23
0
0
17 Oct 2022
Parameter-Efficient Masking Networks
Yue Bai
Huan Wang
Xu Ma
Yitian Zhang
Zhiqiang Tao
Yun Fu
11
10
0
13 Oct 2022
Why Random Pruning Is All We Need to Start Sparse
Advait Gadhikar
Sohom Mukherjee
R. Burkholz
41
19
0
05 Oct 2022
Neural Network Panning: Screening the Optimal Sparse Network Before Training
Xiatao Kang
P. Li
Jiayi Yao
Chengxi Li
VLM
16
1
0
27 Sep 2022
Random Fourier Features for Asymmetric Kernels
Ming-qian He
Fan He
Fanghui Liu
Xiaolin Huang
20
3
0
18 Sep 2022
Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
Zhenyu Zhu
Fanghui Liu
Grigorios G. Chrysos
V. Cevher
39
19
0
15 Sep 2022
Generalization Properties of NAS under Activation and Skip Connection Search
Zhenyu Zhu
Fanghui Liu
Grigorios G. Chrysos
V. Cevher
AI4CE
8
14
0
15 Sep 2022
One-shot Network Pruning at Initialization with Discriminative Image Patches
Yinan Yang
Yu Wang
Yi Ji
Heng Qi
Jien Kato
VLM
26
4
0
13 Sep 2022
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints
Jose Gallego-Posada
Juan Ramirez
Akram Erraqabi
Yoshua Bengio
Simon Lacoste-Julien
20
16
0
08 Aug 2022
To update or not to update? Neurons at equilibrium in deep models
Andrea Bragagnolo
Enzo Tartaglione
Marco Grangetto
22
10
0
19 Jul 2022
The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural Network
Zhongzhan Huang
Senwei Liang
Mingfu Liang
Wei He
Haizhao Yang
Liang Lin
19
9
0
16 Jul 2022
PRANC: Pseudo RAndom Networks for Compacting deep models
Parsa Nooralinejad
Ali Abbasi
Soroush Abbasi Koohpayegani
Kossar Pourahmadi Meibodi
Rana Muhammad Shahroz Khan
Soheil Kolouri
Hamed Pirsiavash
DD
24
0
0
16 Jun 2022
Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures
F. Farias
Teresa B Ludermir
C. B. Filho
10
2
0
14 Jun 2022
PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
Sanghoon Myung
I. Huh
Wonik Jang
Jae Myung Choe
Jisu Ryu
Daesin Kim
Kee-Eung Kim
C. Jeong
16
13
0
12 Jun 2022
A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation
Wenjing Yang
G. Wang
Jie Ding
Yuhong Yang
MLT
25
7
0
11 Jun 2022
A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis
Damien Ferbach
Christos Tsirigotis
Gauthier Gidel
Avishek A. Bose
32
16
0
09 Jun 2022
Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks
Daiki Chijiwa
Shin'ya Yamaguchi
Atsutoshi Kumagai
Yasutoshi Ida
16
8
0
31 May 2022
Towards efficient feature sharing in MIMO architectures
Rémy Sun
Alexandre Ramé
Clément Masson
Nicolas Thome
Matthieu Cord
52
6
0
20 May 2022
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
Paul Wimmer
Jens Mehnert
A. P. Condurache
DD
34
20
0
17 May 2022
Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective
Keitaro Sakamoto
Issei Sato
20
9
0
15 May 2022