What's Hidden in a Randomly Weighted Neural Network?
arXiv:1911.13299. 29 November 2019.
Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari
Papers citing "What's Hidden in a Randomly Weighted Neural Network?" (44 of 94 citing papers shown):
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks. Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe (09 Mar 2022).
Extracting Effective Subnetworks with Gumbel-Softmax. Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois (25 Feb 2022).
Rare Gems: Finding Lottery Tickets at Initialization. Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos (24 Feb 2022).
Bit-wise Training of Neural Network Weights. Cristian Ivan (19 Feb 2022).
Deadwooding: Robust Global Pruning for Deep Neural Networks. Sawinder Kaur, Ferdinando Fioretto, Asif Salekin (10 Feb 2022).
Robust Binary Models by Pruning Randomly-initialized Networks. Chen Liu, Ziqi Zhao, Sabine Süsstrunk, Mathieu Salzmann (03 Feb 2022).
Signing the Supermask: Keep, Hide, Invert. Nils Koster, O. Grothe, Achim Rettinger (31 Jan 2022).
Neural Network Module Decomposition and Recomposition. Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki (25 Dec 2021).
SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning. Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward (07 Dec 2021).
Hidden-Fold Networks: Random Recurrent Residuals Using Sparse Supermasks. Ángel López García-Arias, Masanori Hashimoto, Masato Motomura, Jaehoon Yu (24 Nov 2021).
Efficient Neural Network Training via Forward and Backward Propagation Sparsification. Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang (10 Nov 2021).
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks. Hassan Dbouk, Naresh R. Shanbhag (28 Oct 2021).
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks. Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin (26 Oct 2021).
Lottery Tickets with Nonzero Biases. Jonas Fischer, Advait Gadhikar, R. Burkholz (21 Oct 2021).
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong (12 Oct 2021).
Block Pruning For Faster Transformers. François Lagunas, Ella Charlaix, Victor Sanh, Alexander M. Rush (10 Sep 2021).
What's Hidden in a One-layer Randomly Weighted Transformer? Sheng Shen, Z. Yao, Douwe Kiela, Kurt Keutzer, Michael W. Mahoney (08 Sep 2021).
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness. James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, B. Kailkhura (16 Jun 2021).
Structured Ensembles: an Approach to Reduce the Memory Footprint of Ensemble Methods. Jary Pomponi, Simone Scardapane, A. Uncini (06 May 2021).
Effective Sparsification of Neural Networks with Global Sparsity Constraint. Xiao Zhou, Weizhong Zhang, Hang Xu, Tong Zhang (03 May 2021).
Lottery Jackpots Exist in Pre-trained Models. Yuxin Zhang, Mingbao Lin, Yan Wang, Rongrong Ji (18 Apr 2021).
Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network. James Diffenderfer, B. Kailkhura (17 Mar 2021).
Recent Advances on Neural Network Pruning at Initialization. Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu (11 Mar 2021).
Knowledge Evolution in Neural Networks. Ahmed Taha, Abhinav Shrivastava, L. Davis (09 Mar 2021).
Reservoir Transformers. Sheng Shen, Alexei Baevski, Ari S. Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela (30 Dec 2020).
FreezeNet: Full Performance by Reduced Storage Costs. Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache (28 Nov 2020).
Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win. Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin (07 Oct 2020).
Against Membership Inference Attack: Pruning is All You Need. Yijue Wang, Chenghong Wang, Zigeng Wang, Shangli Zhou, Hang Liu, J. Bi, Caiwen Ding, Sanguthevar Rajasekaran (28 Aug 2020).
Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? Yaniv Blumenfeld, D. Gilboa, Daniel Soudry (02 Jul 2020).
Training highly effective connectivities within neural networks with randomly initialized, fixed weights. Cristian Ivan, Razvan V. Florian (30 Jun 2020).
Supermasks in Superposition. Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, J. Yosinski, Ali Farhadi (26 Jun 2020).
Principal Component Networks: Parameter Reduction Early in Training. R. Waleffe, Theodoros Rekatsinas (23 Jun 2020).
What shapes feature representations? Exploring datasets, architectures, and training. Katherine L. Hermann, Andrew Kyle Lampinen (22 Jun 2020).
Logarithmic Pruning is All You Need. Laurent Orseau, Marcus Hutter, Omar Rivasplata (22 Jun 2020).
Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient. Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos (14 Jun 2020).
An Overview of Neural Network Compression. James O'Neill (05 Jun 2020).
Movement Pruning: Adaptive Sparsity by Fine-Tuning. Victor Sanh, Thomas Wolf, Alexander M. Rush (15 May 2020).
CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context. Wenyu Zhang, Skyler Seto, Devesh K. Jha (26 Mar 2020).
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs. Jonathan Frankle, D. Schwab, Ari S. Morcos (29 Feb 2020).
Deep Randomized Neural Networks. Claudio Gallicchio, Simone Scardapane (27 Feb 2020).
Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming. M. Elaraby, Guy Wolf, Margarida Carvalho (17 Feb 2020).
Proving the Lottery Ticket Hypothesis: Pruning is All You Need. Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir (03 Feb 2020).
Linear Mode Connectivity and the Lottery Ticket Hypothesis. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin (11 Dec 2019).
Neural Architecture Search with Reinforcement Learning. Barret Zoph, Quoc V. Le (05 Nov 2016).