
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
arXiv:1803.03635 (v5, latest)
Jonathan Frankle, Michael Carbin
9 March 2018

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

36 / 2,186 papers shown
SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers
Neural Information Processing Systems (NeurIPS), 2019
Igor Fedorov, Ryan P. Adams, Matthew Mattina, P. Whatmough
28 May 2019
Self-supervised audio representation learning for mobile devices
Marco Tagliasacchi, Beat Gfeller, Félix de Chaumont Quitry, Dominik Roblek
24 May 2019
Learning Low-Rank Approximation for CNNs
Dongsoo Lee, S. Kwon, Byeongwook Kim, Gu-Yeon Wei
24 May 2019
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization
Computer Vision and Pattern Recognition (CVPR), 2019
S. Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei
24 May 2019
Exploring Structural Sparsity of Deep Networks via Inverse Scale Spaces
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019
Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Xingtai Lv
23 May 2019
Sparse Transfer Learning via Winning Lottery Tickets
Rahul Mehta
19 May 2019
Network Pruning for Low-Rank Binary Indexing
Dongsoo Lee, S. Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei
14 May 2019
Analysis of Gene Interaction Graphs as Prior Knowledge for Machine Learning Models
Paul Bertin, Mohammad Hashir, Martin Weiss, Vincent Frappier, T. Perkins, G. Boucher, Joseph Paul Cohen
06 May 2019
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
Neural Information Processing Systems (NeurIPS), 2019
Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski
03 May 2019
Differentiable Visual Computing
Tzu-Mao Li
27 Apr 2019
Low-Memory Neural Network Training: A Technical Report
N. Sohoni, Christopher R. Aberger, Megan Leszczynski, Jian Zhang, Christopher Ré
24 Apr 2019
Filter Pruning by Switching to Neighboring CNNs with Good Attributes
Yang He, Ping Liu, Linchao Zhu, Yi Yang
08 Apr 2019
Adversarial Robustness vs Model Compression, or Both?
Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin
29 Mar 2019
How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
Subutai Ahmad, Luiz Scheinkman
27 Mar 2019
Convolution with even-sized kernels and symmetric padding
Neural Information Processing Systems (NeurIPS), 2019
Shuang Wu, Guanrui Wang, Pei Tang, F. Chen, Luping Shi
20 Mar 2019
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
17 Mar 2019
Stabilizing the Lottery Ticket Hypothesis
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
05 Mar 2019
Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers
Entropy, 2019
Baihan Lin
27 Feb 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa, Xin Wang
15 Feb 2019
Identity Crisis: Memorization and Generalization under Extreme Overparameterization
International Conference on Learning Representations (ICLR), 2019
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, Y. Singer
13 Feb 2019
Intrinsically Sparse Long Short-Term Memory Networks
Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy
26 Jan 2019
Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware
Shiwei Liu, Decebal Constantin Mocanu, A. R. Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy
26 Jan 2019
A Theoretical Analysis of Deep Q-Learning
Jianqing Fan, Zhuoran Yang, Yuchen Xie, Zhaoran Wang
01 Jan 2019
On the Benefit of Width for Neural Networks: Disappearance of Bad Basins
Dawei Li, Tian Ding
28 Dec 2018
Artificial neural networks condensation: A strategy to facilitate adaption of machine learning in medical settings by reducing computational burden
Dianbo Liu, N. Sepulveda, Ming Zheng
23 Dec 2018
Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization
Siyuan Qiao, Zhe Lin, Jianming Zhang, Alan Yuille
02 Dec 2018
Structured Pruning of Neural Networks with Budget-Aware Regularization
Computer Vision and Pattern Recognition (CVPR), 2018
Carl Lemaire, Andrew Achkar, Pierre-Marc Jodoin
23 Nov 2018
Understanding Learning Dynamics Of Language Models with SVCCA
Naomi Saphra, Adam Lopez
01 Nov 2018
The Deep Weight Prior
Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling
16 Oct 2018
Rethinking the Value of Network Pruning
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
11 Oct 2018
A Closer Look at Structured Pruning for Neural Network Compression
Elliot J. Crowley, Jack Turner, Amos Storkey, Michael F. P. O'Boyle
10 Oct 2018
Learning with Random Learning Rates
Léonard Blier, Pierre Wolinski, Yann Ollivier
02 Oct 2018
To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression
Yiren Zhao, Ilia Shumailov, Robert D. Mullins, Ross J. Anderson
29 Sep 2018
Dense neural networks as sparse graphs and the lightning initialization
T. Pircher, D. Haspel, Eberhard Schlücker
24 Sep 2018
Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018
Rana Ali Amjad, Bernhard C. Geiger
27 Feb 2018
Nonparametric regression using deep neural networks with ReLU activation function
Johannes Schmidt-Hieber
22 Aug 2017