arXiv: 1803.03635
Cited By
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
9 March 2018
Jonathan Frankle
Michael Carbin
Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (49 of 599 papers shown)
CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context
Wenyu Zhang
Skyler Seto
Devesh K. Jha
18
5
0
26 Mar 2020
Born-Again Tree Ensembles
Thibaut Vidal
Toni Pacheco
Maximilian Schiffer
59
53
0
24 Mar 2020
SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi
Jianfeng Xu
K. Tasaka
Zhibo Chen
4
25
0
12 Mar 2020
Π-nets: Deep Polynomial Neural Networks
Grigorios G. Chrysos
Stylianos Moschoglou
Giorgos Bouritsas
Yannis Panagakis
Jiankang Deng
S. Zafeiriou
29
58
0
08 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda
Jonathan Frankle
Michael Carbin
224
383
0
05 Mar 2020
Learning in the Frequency Domain
Kai Xu
Minghai Qin
Fei Sun
Yuhao Wang
Yen-kuang Chen
Fengbo Ren
39
393
0
27 Feb 2020
Deep Randomized Neural Networks
Claudio Gallicchio
Simone Scardapane
OOD
41
61
0
27 Feb 2020
Predicting Neural Network Accuracy from Weights
Thomas Unterthiner
Daniel Keysers
Sylvain Gelly
Olivier Bousquet
Ilya O. Tolstikhin
22
101
0
26 Feb 2020
The Early Phase of Neural Network Training
Jonathan Frankle
D. Schwab
Ari S. Morcos
19
171
0
24 Feb 2020
Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani
James Y. Zou
FAtt
TDI
25
108
0
23 Feb 2020
Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
M. Elaraby
Guy Wolf
Margarida Carvalho
26
5
0
17 Feb 2020
A study of local optima for learning feature interactions using neural networks
Yangzi Guo
Adrian Barbu
16
1
0
11 Feb 2020
Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach
Gilad Yehudai
Shai Shalev-Shwartz
Ohad Shamir
48
271
0
03 Feb 2020
MEMO: A Deep Network for Flexible Combination of Episodic Memories
Andrea Banino
Adria Puigdomenech Badia
Raphael Köster
Martin Chadwick
V. Zambaldi
Demis Hassabis
Caswell Barry
M. Botvinick
D. Kumaran
Charles Blundell
KELM
26
33
0
29 Jan 2020
Progressive Local Filter Pruning for Image Retrieval Acceleration
Xiaodong Wang
Zhedong Zheng
Yang He
Fei Yan
Zhi-qiang Zeng
Yi Yang
25
34
0
24 Jan 2020
Filter Sketch for Network Pruning
Mingbao Lin
Liujuan Cao
Shaojie Li
QiXiang Ye
Yonghong Tian
Jianzhuang Liu
Q. Tian
Rongrong Ji
CLIP
3DPC
23
82
0
23 Jan 2020
Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future
Grace W. Lindsay
MedIm
35
423
0
20 Jan 2020
Least squares binary quantization of neural networks
Hadi Pouransari
Zhucheng Tu
Oncel Tuzel
MQ
17
32
0
09 Jan 2020
Sparse Weight Activation Training
Md Aamir Raihan
Tor M. Aamodt
32
72
0
07 Jan 2020
Lossless Compression of Deep Neural Networks
Thiago Serra
Abhinav Kumar
Srikumar Ramalingam
24
56
0
01 Jan 2020
Mixed-Precision Quantized Neural Network with Progressively Decreasing Bitwidth For Image Classification and Object Detection
Tianshu Chu
Qin Luo
Jie-jin Yang
Xiaolin Huang
MQ
16
6
0
29 Dec 2019
QKD: Quantization-aware Knowledge Distillation
Jangho Kim
Yash Bhalgat
Jinwon Lee
Chirag I. Patel
Nojun Kwak
MQ
16
63
0
28 Nov 2019
Learning Sparse Sharing Architectures for Multiple Tasks
Tianxiang Sun
Yunfan Shao
Xiaonan Li
Pengfei Liu
Hang Yan
Xipeng Qiu
Xuanjing Huang
MoE
27
128
0
12 Nov 2019
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks
Chunfeng Cui
Kaiqi Zhang
Talgat Daulbaev
Julia Gusak
Ivan V. Oseledets
Zheng-Wei Zhang
AAML
24
25
0
29 Oct 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
S. Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
37
6,110
0
22 Oct 2019
Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection
Flavio Martinelli
Giorgia Dellaferrera
Pablo Mainar
Milos Cernak
19
29
0
22 Oct 2019
How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections?
Kartikeya Bhardwaj
Guihong Li
R. Marculescu
30
1
0
02 Oct 2019
Optimizing Speech Recognition For The Edge
Yuan Shangguan
Jian Li
Qiao Liang
R. Álvarez
Ian McGraw
20
64
0
26 Sep 2019
Model Pruning Enables Efficient Federated Learning on Edge Devices
Yuang Jiang
Shiqiang Wang
Victor Valls
Bongjun Ko
Wei-Han Lee
Kin K. Leung
Leandros Tassiulas
30
444
0
26 Sep 2019
RNN Architecture Learning with Sparse Regularization
Jesse Dodge
Roy Schwartz
Hao Peng
Noah A. Smith
20
10
0
06 Sep 2019
Image Captioning with Sparse Recurrent Neural Network
J. Tan
Chee Seng Chan
Joon Huang Chuah
VLM
21
6
0
28 Aug 2019
A deep-learning-based surrogate model for data assimilation in dynamic subsurface flow problems
Meng Tang
Yimin Liu
L. Durlofsky
AI4CE
21
257
0
16 Aug 2019
Convolutional Dictionary Learning in Hierarchical Networks
Javier Zazo
Bahareh Tolooshams
Demba E. Ba
BDL
29
5
0
23 Jul 2019
Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
Alejandro Molina
P. Schramowski
Kristian Kersting
ODL
23
77
0
15 Jul 2019
Bringing Giant Neural Networks Down to Earth with Unlabeled Data
Yehui Tang
Shan You
Chang Xu
Boxin Shi
Chao Xu
16
11
0
13 Jul 2019
Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers
Luke Zettlemoyer
20
333
0
10 Jul 2019
XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera
Dushyant Mehta
Oleksandr Sotnychenko
Franziska Mueller
Weipeng Xu
Mohamed A. Elgharib
Pascal Fua
Hans-Peter Seidel
Helge Rhodin
Gerard Pons-Moll
Christian Theobalt
3DH
18
168
0
01 Jul 2019
Weight Agnostic Neural Networks
Adam Gaier
David R Ha
OOD
30
239
0
11 Jun 2019
SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers
Igor Fedorov
Ryan P. Adams
Matthew Mattina
P. Whatmough
13
164
0
28 May 2019
How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
Subutai Ahmad
Luiz Scheinkman
25
96
0
27 Mar 2019
Convolution with even-sized kernels and symmetric padding
Shuang Wu
Guanrui Wang
Pei Tang
F. Chen
Luping Shi
14
67
0
20 Mar 2019
Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers
Baihan Lin
16
2
0
27 Feb 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa
Xin Wang
29
307
0
15 Feb 2019
Intrinsically Sparse Long Short-Term Memory Networks
Shiwei Liu
D. Mocanu
Mykola Pechenizkiy
22
9
0
26 Jan 2019
Structured Pruning of Neural Networks with Budget-Aware Regularization
Carl Lemaire
Andrew Achkar
Pierre-Marc Jodoin
27
92
0
23 Nov 2018
Rethinking the Value of Network Pruning
Zhuang Liu
Mingjie Sun
Tinghui Zhou
Gao Huang
Trevor Darrell
10
1,449
0
11 Oct 2018
Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle
Rana Ali Amjad
Bernhard C. Geiger
32
195
0
27 Feb 2018
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal
Zoubin Ghahramani
UQCV
BDL
285
9,138
0
06 Jun 2015
Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton
Nitish Srivastava
A. Krizhevsky
Ilya Sutskever
Ruslan Salakhutdinov
VLM
266
7,636
0
03 Jul 2012