arXiv: 1803.03635
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin (9 March 2018)
Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (50 of 599 shown)
FreezeNet: Full Performance by Reduced Storage Costs
Paul Wimmer, Jens Mehnert, A. P. Condurache (28 Nov 2020)

Bringing AI To Edge: From Deep Learning's Perspective
Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam (25 Nov 2020)

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand (20 Nov 2020)

Dynamic Hard Pruning of Neural Networks at the Edge of the Internet
Lorenzo Valerio, F. M. Nardini, A. Passarella, R. Perego (17 Nov 2020)

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
Enzo Tartaglione, Andrea Bragagnolo, A. Fiandrotti, Marco Grangetto (16 Nov 2020) [ODL, UQCV]

Gaussian Processes with Skewed Laplace Spectral Mixture Kernels for Long-term Forecasting
Kai Chen, Twan van Laarhoven, E. Marchiori (08 Nov 2020) [AI4TS]

Low-Complexity Models for Acoustic Scene Classification Based on Receptive Field Regularization and Frequency Damping
Khaled Koutini, Florian Henkel, Hamid Eghbalzadeh, Gerhard Widmer (05 Nov 2020)

A Bayesian Perspective on Training Speed and Model Selection
Clare Lyle, Lisa Schut, Binxin Ru, Y. Gal, Mark van der Wilk (27 Oct 2020)

ShiftAddNet: A Hardware-Inspired Deep Network
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin (24 Oct 2020) [OOD, MQ]

Brain-Inspired Learning on Neuromorphic Substrates
Friedemann Zenke, Emre Neftci (22 Oct 2020)

Mixed-Precision Embedding Using a Cache
J. Yang, Jianyu Huang, Jongsoo Park, P. T. P. Tang, Andrew Tulloch (21 Oct 2020)

Learning to Embed Categorical Features without Embedding Tables for Recommendation
Wang-Cheng Kang, D. Cheng, Tiansheng Yao, Xinyang Yi, Ting-Li Chen, Lichan Hong, Ed H. Chi (21 Oct 2020) [LMTD, CML, DML]

Variational Capsule Encoder
Harish RaviPrakash, Syed Muhammad Anwar, Ulas Bagci (18 Oct 2020) [BDL, DRL]

Training independent subnetworks for robust prediction
Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran (13 Oct 2020) [UQCV, OOD]

Pretrained Transformers for Text Ranking: BERT and Beyond
Jimmy J. Lin, Rodrigo Nogueira, Andrew Yates (13 Oct 2020) [VLM]

Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Róbert Csordás, Sjoerd van Steenkiste, Jürgen Schmidhuber (05 Oct 2020)

Pruning Convolutional Filters using Batch Bridgeout
Najeeb Khan, Ian Stavness (23 Sep 2020)

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, J. Lee (22 Sep 2020)

MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen, Marios Savvides (17 Sep 2020)

Multi-Task Learning with Deep Neural Networks: A Survey
M. Crawshaw (10 Sep 2020) [CVBM]

CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics
Guan Li, Junpeng Wang, Han-Wei Shen, Kaixin Chen, Guihua Shan, Zhonghua Lu (08 Sep 2020) [AAML]

It's Hard for Neural Networks To Learn the Game of Life
Jacob Mitchell Springer, Garrett T. Kenyon (03 Sep 2020)

Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu (21 Aug 2020)

LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets
Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, H. Li (07 Aug 2020) [FedML]

Linear discriminant initialization for feed-forward neural networks
Marissa Masden, D. Sinha (24 Jul 2020) [FedML]

The Representation Theory of Neural Networks
M. Armenta, Pierre-Marc Jodoin (23 Jul 2020)

Probabilistic Active Meta-Learning
Jean Kaddour, Steindór Sæmundsson, M. Deisenroth (17 Jul 2020)

T-Basis: a Compact Representation for Neural Networks
Anton Obukhov, M. Rakhuba, Stamatios Georgoulis, Menelaos Kanakis, Dengxin Dai, Luc Van Gool (13 Jul 2020)

Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization?
Yaniv Blumenfeld, D. Gilboa, Daniel Soudry (02 Jul 2020) [ODL]

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li (02 Jul 2020)

GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, M. Krikun, Noam M. Shazeer, Z. Chen (30 Jun 2020) [MoE]

Training highly effective connectivities within neural networks with randomly initialized, fixed weights
Cristian Ivan, Razvan V. Florian (30 Jun 2020)

Supermasks in Superposition
Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, J. Yosinski, Ali Farhadi (26 Jun 2020) [SSL, CLL]

Data-dependent Pruning to find the Winning Lottery Ticket
Dániel Lévai, Zsolt Zombori (25 Jun 2020) [UQCV]

Revisiting Loss Modelling for Unstructured Pruning
César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent (22 Jun 2020)

Logarithmic Pruning is All You Need
Laurent Orseau, Marcus Hutter, Omar Rivasplata (22 Jun 2020)

Deep Polynomial Neural Networks
Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Jiankang Deng, Yannis Panagakis, S. Zafeiriou (20 Jun 2020)

Directional Pruning of Deep Neural Networks
Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng (16 Jun 2020) [ODL]

Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania (16 Jun 2020)

Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos (14 Jun 2020)

High-contrast "gaudy" images improve the training of deep neural network models of visual cortex
Benjamin R. Cowley, Jonathan W. Pillow (13 Jun 2020)

Towards More Practical Adversarial Attacks on Graph Neural Networks
Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei (09 Jun 2020) [AAML]

A Framework for Neural Network Pruning Using Gibbs Distributions
Alex Labach, S. Valaee (08 Jun 2020)

An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas (06 Jun 2020)

An Overview of Neural Network Compression
James O'Neill (05 Jun 2020) [AI4CE]

Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li (20 May 2020) [MLT, AAML]

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So (14 May 2020)

Data-Free Network Quantization With Adversarial Knowledge Distillation
Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee (08 May 2020) [MQ]

Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto (30 Apr 2020)

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (23 Apr 2020) [BDL]