ResearchTrend.AI

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
arXiv:1803.03635 (latest version: v5) · 9 March 2018
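The paper's core procedure, iterative magnitude pruning with rewinding (train, prune the smallest-magnitude weights, reset the survivors to their original initialization, repeat), can be illustrated with a toy sketch. This is not the authors' implementation: it uses a single-layer logistic model in NumPy, and all function names and hyperparameters here are invented for the example.

```python
import numpy as np

def train(w, mask, X, y, lr=0.5, steps=300):
    """Gradient descent on logistic loss; only unmasked weights are updated."""
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ (w * mask))))   # forward pass on the sparse net
        w -= lr * mask * (X.T @ (p - y)) / len(y)      # masked gradient step
    return w

def find_ticket(X, y, rounds=3, prune_frac=0.5, seed=0):
    """Iterative magnitude pruning with rewinding to the original initialization."""
    rng = np.random.default_rng(seed)
    w_init = rng.normal(scale=0.1, size=X.shape[1])    # the "lottery" initialization
    mask = np.ones_like(w_init)
    for _ in range(rounds):
        w = train(w_init, mask, X, y)                  # 1. train the current subnetwork
        alive = np.abs((w * mask)[mask == 1])
        thresh = np.quantile(alive, prune_frac)        # 2. prune smallest-magnitude weights
        mask = np.where(np.abs(w * mask) >= thresh, mask, 0.0)
        # 3. rewind: survivors restart from w_init on the next round
    return w_init, mask
```

The "winning ticket" claim is then that retraining `(w_init, mask)` from scratch matches the dense network's accuracy at a fraction of the parameters.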

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 50 of 2,186 citing papers.
Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio (CVPR 2020)
  Zhengsu Chen, J. Niu, Lingxi Xie, Xuefeng Liu, Longhui Wei, Qi Tian · 06 Apr 2020
Composition of Saliency Metrics for Channel Pruning with a Myopic Oracle (IEEE SSCI 2020)
  Kaveena Persand, Andrew Anderson, David Gregg · 03 Apr 2020
Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)
  Arturo Marbán, Daniel Becking, Simon Wiedemann, Wojciech Samek · MQ · 02 Apr 2020
Nonconvex sparse regularization for deep neural networks and its optimality (Neural Computation, 2020)
  Ilsang Ohn, Yongdai Kim · 26 Mar 2020
CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context (IROS 2020)
  Wenyu Zhang, Skyler Seto, Devesh K. Jha · 26 Mar 2020
Born-Again Tree Ensembles (ICML 2020)
  Thibaut Vidal, Toni Pacheco, Maximilian Schiffer · 24 Mar 2020
Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting
  Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu · 23 Mar 2020
Convergence of Artificial Intelligence and High Performance Computing on NSF-supported Cyberinfrastructure (Journal of Big Data, 2020)
  Eliu A. Huerta, Asad Khan, Edward Davis, Colleen Bushell, W. Gropp, ..., S. Koric, William T. C. Kramer, Brendan McGinty, Kenton McHenry, Aaron Saxton · AI4CE · 18 Mar 2020
SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
  Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen · 12 Mar 2020
How Powerful Are Randomly Initialized Pointcloud Set Functions?
  Aditya Sanghi, P. Jayaraman · 3DPC · 11 Mar 2020
Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology (Machine Learning and Knowledge Extraction, 2020)
  Stefan Studer, T. Bui, C. Drescher, A. Hanuschkin, Ludwig Winkler, S. Peters, Klaus-Robert Müller · 11 Mar 2020
Pruned Neural Networks are Surprisingly Modular
  Daniel Filan, Shlomi Hod, Cody Wild, Andrew Critch, Stuart J. Russell · 10 Mar 2020
Channel Pruning via Optimal Thresholding (ICONIP 2020)
  Yun Ye, Ganmei You, Jong-Kae Fwu, Xia Zhu, Q. Yang, Yuan Zhu · 10 Mar 2020
Π-nets: Deep Polynomial Neural Networks (CVPR 2020)
  Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Yannis Panagakis, Jiankang Deng, Stefanos Zafeiriou · 08 Mar 2020
FedLoc: Federated Learning Framework for Data-Driven Cooperative Localization and Location Data Processing
  Feng Yin, Zhidi Lin, Yue Xu, Qinglei Kong, Deshi Li, Sergios Theodoridis, Shuguang Cui · FedML · 08 Mar 2020
What is the State of Neural Network Pruning? (MLSys 2020)
  Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 06 Mar 2020
Towards Practical Lottery Ticket Hypothesis for Adversarial Training
  Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana · AAML · 06 Mar 2020
Train-by-Reconnect: Decoupling Locations of Weights from their Values
  Yushi Qiu, R. Suda · 05 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning (ICLR 2020)
  Alex Renda, Jonathan Frankle, Michael Carbin · 05 Mar 2020
Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection (ICML 2020)
  Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam R. Klivans, Qiang Liu · 03 Mar 2020
A New MRAM-based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision (ISCAS 2020)
  Hongjie Wang, Yang Zhao, Chaojian Li, Yue Wang, Yingyan Lin · 02 Mar 2020
MBGD-RDA Training and Rule Pruning for Concise TSK Fuzzy Regression Models
  Dongrui Wu · 01 Mar 2020
Channel Equilibrium Networks for Learning Deep Representation (ICML 2020)
  Wenqi Shao, Shitao Tang, Xingang Pan, Ping Tan, Xiaogang Wang, Ping Luo · 29 Feb 2020
Learned Threshold Pruning
  K. Azarian, Brandon Smart, Jinwon Lee, Tijmen Blankevoort · MQ · 28 Feb 2020
Learning in the Frequency Domain (CVPR 2020)
  Kai Xu, Minghai Qin, Fei Sun, Yuhao Wang, Yen-kuang Chen, Fengbo Ren · 27 Feb 2020
A Primer in BERTology: What we know about how BERT works (TACL 2020)
  Anna Rogers, Olga Kovaleva, Anna Rumshisky · 27 Feb 2020
Deep Randomized Neural Networks
  Claudio Gallicchio, Simone Scardapane · OOD · 27 Feb 2020
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT (TACL 2020)
  Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yifan Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, Marianne Winslett · AI4CE · 27 Feb 2020
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
  Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez · 26 Feb 2020
Predicting Neural Network Accuracy from Weights
  Thomas Unterthiner, Daniel Keysers, Sylvain Gelly, Olivier Bousquet, Ilya O. Tolstikhin · 26 Feb 2020
HYDRA: Pruning Adversarially Robust Neural Networks
  Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana · AAML · 24 Feb 2020
The Early Phase of Neural Network Training (ICLR 2020)
  Jonathan Frankle, D. Schwab, Ari S. Morcos · 24 Feb 2020
Neuron Shapley: Discovering the Responsible Neurons (NeurIPS 2020)
  Amirata Ghorbani, James Zou · FAtt, TDI · 23 Feb 2020
Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning (RepL4NLP 2020)
  Mitchell A. Gordon, Kevin Duh, Nicholas Andrews · VLM · 19 Feb 2020
Robust Pruning at Initialization (ICLR 2020)
  Soufiane Hayou, Jean-François Ton, Arnaud Doucet, Yee Whye Teh · 19 Feb 2020
Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming (CPAIOR 2020)
  M. Elaraby, Guy Wolf, Margarida Carvalho · 17 Feb 2020
DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving
  Wei Deng, Junwei Pan, Tian Zhou, Deguang Kong, Aaron Flores, Guang Lin · 17 Feb 2020
The Differentially Private Lottery Ticket Mechanism
  Lovedeep Gondara, Ke Wang, Ricardo Silva Carvalho · 16 Feb 2020
Lookahead: A Far-Sighted Alternative of Magnitude-based Pruning (ICLR 2020)
  Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin · 12 Feb 2020
A study of local optima for learning feature interactions using neural networks (IJCNN 2020)
  Yangzi Guo, Adrian Barbu · 11 Feb 2020
Deep Gated Networks: A framework to understand training and generalisation in deep learning
  Chandrashekar Lakshminarayanan, Amit Singh · AI4CE · 10 Feb 2020
Calibrate and Prune: Improving Reliability of Lottery Tickets Through Prediction Calibration
  Bindya Venkatesh, Jayaraman J. Thiagarajan, Kowshik Thopalli, P. Sattigeri · 10 Feb 2020
Convolutional Neural Network Pruning Using Filter Attenuation (ICIP 2020)
  Morteza Mousa Pasandi, M. Hajabdollahi, N. Karimi, S. Samavi, S. Shirani · 09 Feb 2020
Soft Threshold Weight Reparameterization for Learnable Sparsity (ICML 2020)
  Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi · 08 Feb 2020
PixelHop++: A Small Successive-Subspace-Learning-Based (SSL-based) Model for Image Classification (ICIP 2020)
  Yueru Chen, Mozhdeh Rouhsedaghat, Suya You, Raghuveer Rao, C.-C. Jay Kuo · 08 Feb 2020
Activation Density driven Energy-Efficient Pruning in Training
  Timothy Foldy-Porto, Yeshwanth Venkatesha, Priyadarshini Panda · 07 Feb 2020
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing (EMNLP 2020)
  Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou · 07 Feb 2020
Multimodal Controller for Generative Models
  Enmao Diao, Jie Ding, Vahid Tarokh · 07 Feb 2020
BABO: Background Activation Black-Out for Efficient Object Detection
  Byungseok Roh, Hankyu Cho, Myung-Ho Ju, Soon Hyung Pyo · ObjD · 05 Feb 2020
Proving the Lottery Ticket Hypothesis: Pruning is All You Need (ICML 2020)
  Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir · 03 Feb 2020