The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Versions: v1–v5 (latest)

9 March 2018
Jonathan Frankle, Michael Carbin
ArXiv (abs) · PDF · HTML
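
For readers arriving from the citation list: the paper's core procedure is iterative magnitude pruning with rewinding. Train the network, prune a fraction of the smallest-magnitude weights, reset the survivors to their original initialization, and repeat. The sketch below is a minimal, simplified rendering of that loop in PyTorch, not the authors' reference implementation; `train_one_round` is a hypothetical stand-in for a full training loop that keeps masked weights at zero during optimization.

```python
# Minimal sketch of iterative magnitude pruning with weight rewinding
# (the "lottery ticket" procedure), assuming PyTorch. `train_one_round`
# is a hypothetical callback supplied by the caller; it must keep the
# masked weights at zero while training.
import copy
import torch
import torch.nn as nn

def find_winning_ticket(model: nn.Module, train_one_round,
                        rounds: int = 5, prune_frac: float = 0.2):
    init_state = copy.deepcopy(model.state_dict())      # theta_0, kept for rewinding
    masks = {name: torch.ones_like(p)                   # 1 = keep, 0 = pruned
             for name, p in model.named_parameters()
             if p.dim() > 1}                            # prune weights, not biases
    for _ in range(rounds):
        train_one_round(model, masks)                   # train to completion
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            magnitudes = p.detach().abs() * masks[name] # ignore already-pruned slots
            alive = magnitudes[masks[name].bool()]      # surviving weights only
            k = int(prune_frac * alive.numel())         # e.g. drop 20% per round
            if k == 0:
                continue
            threshold = alive.kthvalue(k).values        # k-th smallest survivor
            masks[name][magnitudes <= threshold] = 0.0  # prune smallest magnitudes
        model.load_state_dict(init_state)               # rewind survivors to theta_0
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])                 # re-apply mask after rewind
    return masks
```

In the paper's terms, a subnetwork whose masked, rewound weights train to match the full network's accuracy is a "winning ticket".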

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 50 of 2,187 citing papers (page 40 of 44)
Deep Polynomial Neural Networks
Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Jiankang Deng, Yannis Panagakis, Stefanos Zafeiriou
168 · 110 · 0 · 20 Jun 2020
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation
Duong H. Le, Vo Trung Nhan, N. Thoai
VLM · 96 · 7 · 0 · 20 Jun 2020
Discovering Symbolic Models from Deep Learning with Inductive Biases
M. Cranmer, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Rui Xu, Kyle Cranmer, D. Spergel, S. Ho
AI4CE · 394 · 581 · 0 · 19 Jun 2020
Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak
228 · 4 · 0 · 19 Jun 2020
Directional Pruning of Deep Neural Networks
Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng
ODL · 267 · 35 · 0 · 16 Jun 2020
Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Juil Sock, Grégory Rogez, P. Dokania
480 · 108 · 0 · 16 Jun 2020
Finding trainable sparse networks through Neural Tangent Transfer
Tianlin Liu, Friedemann Zenke
176 · 39 · 0 · 15 Jun 2020
Neural gradients are near-lognormal: improved quantized and sparse training
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry
MQ · 261 · 5 · 0 · 15 Jun 2020
Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient (NeurIPS 2020)
Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos
357 · 112 · 0 · 14 Jun 2020
High-contrast "gaudy" images improve the training of deep neural network models of visual cortex (NeurIPS 2020)
Benjamin R. Cowley, Jonathan W. Pillow
144 · 11 · 0 · 13 Jun 2020
Dynamic Model Pruning with Feedback (ICLR 2020)
Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi
269 · 225 · 0 · 12 Jun 2020
A Practical Sparse Approximation for Real Time Recurrent Learning
Jacob Menick, Erich Elsen, Utku Evci, Simon Osindero, Karen Simonyan, Alex Graves
191 · 33 · 0 · 12 Jun 2020
How many winning tickets are there in one DNN?
Kathrin Grosse, Michael Backes
UQCV · 99 · 2 · 0 · 12 Jun 2020
Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning (NeurIPS 2020)
Chandrashekar Lakshminarayanan, Amit Singh
AI4CE · 181 · 11 · 0 · 11 Jun 2020
Convolutional neural networks compression with low rank and sparse tensor decompositions
Pavel Kaloshin
121 · 1 · 0 · 11 Jun 2020
Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli
558 · 755 · 0 · 09 Jun 2020
Towards More Practical Adversarial Attacks on Graph Neural Networks
Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei
AAML · 233 · 144 · 0 · 09 Jun 2020
A Framework for Neural Network Pruning Using Gibbs Distributions
Alex Labach, S. Valaee
140 · 5 · 0 · 08 Jun 2020
Differentiable Neural Input Search for Recommender Systems
Weiyu Cheng, Yanyan Shen, Linpeng Huang
249 · 38 · 0 · 08 Jun 2020
Neural Sparse Representation for Image Restoration
Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Y. Fu, Ding Liu, Thomas S. Huang
64 · 36 · 0 · 08 Jun 2020
An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas
190 · 19 · 0 · 06 Jun 2020
Accelerating Natural Language Understanding in Task-Oriented Dialog
Ojas Ahuja, Shrey Desai
VLM · 119 · 1 · 0 · 05 Jun 2020
An Overview of Neural Network Compression
James O'Neill
AI4CE · 346 · 114 · 0 · 05 Jun 2020
Shapley Value as Principled Metric for Structured Network Pruning
Marco Ancona, Cengiz Öztireli, Markus Gross
162 · 10 · 0 · 02 Jun 2020
Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization (MOD 2020)
Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler
ODL · 127 · 10 · 0 · 02 Jun 2020
Pruning via Iterative Ranking of Sensitivity Statistics
Stijn Verdenius, M. Stol, Patrick Forré
AAML · 174 · 42 · 0 · 01 Jun 2020
Transferring Inductive Biases through Knowledge Distillation
Samira Abnar, Mostafa Dehghani, Willem H. Zuidema
307 · 67 · 0 · 31 May 2020
Geometric algorithms for predicting resilience and recovering damage in neural networks
G. Raghavan, Jiayi Li, Matt Thomson
AAML · 136 · 0 · 0 · 23 May 2020
Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li
MLT · AAML · 411 · 167 · 0 · 20 May 2020
Dynamic Sparsity Neural Networks for Automatic Speech Recognition
Zhaofeng Wu, Ding Zhao, Qiao Liang, Jiahui Yu, Anmol Gulati, Ruoming Pang
141 · 44 · 0 · 16 May 2020
Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation
Le Thanh Nguyen-Meidine, Mohammadhadi Shateri, M. Kiran, Jose Dolz, Louis-Antoine Blais-Morin
188 · 23 · 0 · 16 May 2020
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So
181 · 132 · 0 · 14 May 2020
RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks
Rohun Tripathi, Bharat Singh
109 · 8 · 0 · 12 May 2020
On the Transferability of Winning Tickets in Non-Natural Image Datasets
M. Sabatelli, M. Kestemont, Pierre Geurts
180 · 16 · 0 · 11 May 2020
Data-Free Network Quantization With Adversarial Knowledge Distillation
Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee
MQ · 217 · 140 · 0 · 08 May 2020
Efficient Exact Verification of Binarized Neural Networks
Kai Jia, Martin Rinard
AAML · MQ · 174 · 66 · 0 · 07 May 2020
Sources of Transfer in Multilingual Named Entity Recognition (ACL 2020)
David Mueller, Nicholas Andrews, Mark Dredze
149 · 23 · 0 · 02 May 2020
When BERT Plays the Lottery, All Tickets Are Winning (EMNLP 2020)
Sai Prasanna, Anna Rogers, Anna Rumshisky
MILM · 309 · 200 · 0 · 01 May 2020
Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima (ICANN 2020)
Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
155 · 13 · 0 · 30 Apr 2020
Out-of-the-box channel pruned networks
Ragav Venkatesan, Gurumurthy Swaminathan, Xiong Zhou, Anna Luo
114 · 0 · 0 · 30 Apr 2020
Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation (Findings 2020)
Nithin Holla, Pushkar Mishra, H. Yannakoudakis, Ekaterina Shutova
276 · 30 · 0 · 29 Apr 2020
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression
Sidak Pal Singh, Dan Alistarh
271 · 29 · 0 · 29 Apr 2020
Masking as an Efficient Alternative to Finetuning for Pretrained Language Models (EMNLP 2020)
Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, Hinrich Schütze
254 · 127 · 0 · 26 Apr 2020
How fine can fine-tuning be? Learning efficient language models (AISTATS 2020)
Evani Radiya-Dixit, Xin Wang
154 · 73 · 0 · 24 Apr 2020
Convolution-Weight-Distribution Assumption: Rethinking the Criteria of Channel Pruning
Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo
261 · 64 · 0 · 24 Apr 2020
SIPA: A Simple Framework for Efficient Networks
Gihun Lee, Sangmin Bae, Jaehoon Oh, Seyoung Yun
116 · 1 · 0 · 24 Apr 2020
Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
BDL · 487 · 189 · 0 · 23 Apr 2020
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning (VTC 2020)
Sohei Itahara, Takayuki Nishio, M. Morikura, Koji Yamamoto
111 · 12 · 0 · 21 Apr 2020
Neural Status Registers (ICML 2020)
Lukas Faber, Roger Wattenhofer
139 · 9 · 0 · 15 Apr 2020
Prune2Edge: A Multi-Phase Pruning Pipelines to Deep Ensemble Learning in IIoT
Besher Alhalabi, M. Gaber, S. Basurra
114 · 2 · 0 · 09 Apr 2020