Pruning neural networks without any data by iteratively conserving synaptic flow
arXiv:2006.05467 | 9 June 2020
Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli
Papers citing "Pruning neural networks without any data by iteratively conserving synaptic flow" (48 / 98 papers shown)
Each entry lists the title, authors, topic tags (where assigned), the three listing counters from the source page, and the date.
Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution
  Yushu Wu, Yifan Gong, Pu Zhao, Yanyu Li, Zheng Zhan, Wei Niu, Hao Tang, Minghai Qin, Bin Ren, Yanzhi Wang
  Tags: SupR, MQ | 29/23/0 | 25 Jul 2022
Zeroth-Order Topological Insights into Iterative Magnitude Pruning
  Aishwarya H. Balwani, J. Krzyston
  26/2/0 | 14 Jun 2022
Energy Consumption Analysis of pruned Semantic Segmentation Networks on an Embedded GPU
  Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, David Bertrand, T. Hannagan
  Tags: GNN, SSeg, 3DPC | 25/2/0 | 13 Jun 2022
Leveraging Structured Pruning of Convolutional Neural Networks
  Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, David Bertrand, T. Hannagan
  Tags: CVBM | 19/1/0 | 13 Jun 2022
Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
  Aidan Good, Jia-Huei Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, J. Wieczorek, Thiago Serra
  29/11/0 | 07 Jun 2022
Machine Learning for Microcontroller-Class Hardware: A Review
  Swapnil Sayan Saha, S. Sandha, Mani B. Srivastava
  24/118/0 | 29 May 2022
Convolutional and Residual Networks Provably Contain Lottery Tickets
  R. Burkholz
  Tags: UQCV, MLT | 35/13/0 | 04 May 2022
Most Activation Functions Can Win the Lottery Without Excessive Depth
  R. Burkholz
  Tags: MLT | 69/18/0 | 04 May 2022
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
  Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
  28/11/0 | 06 Apr 2022
Monarch: Expressive Structured Matrices for Efficient and Accurate Training
  Tri Dao, Beidi Chen, N. Sohoni, Arjun D Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré
  22/87/0 | 01 Apr 2022
Automated Progressive Learning for Efficient Training of Vision Transformers
  Changlin Li, Bohan Zhuang, Guangrun Wang, Xiaodan Liang, Xiaojun Chang, Yi Yang
  26/46/0 | 28 Mar 2022
Training-free Transformer Architecture Search
  Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, Rongrong Ji
  Tags: ViT | 32/46/0 | 23 Mar 2022
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
  Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
  39/48/0 | 09 Mar 2022
Structured Pruning is All You Need for Pruning CNNs at Initialization
  Yaohui Cai, Weizhe Hua, Hongzheng Chen, G. E. Suh, Christopher De Sa, Zhiru Zhang
  Tags: CVBM | 39/14/0 | 04 Mar 2022
Extracting Effective Subnetworks with Gumbel-Softmax
  Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois
  14/6/0 | 25 Feb 2022
Rare Gems: Finding Lottery Tickets at Initialization
  Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos
  22/42/0 | 24 Feb 2022
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
  Tianlong Chen, Zhenyu (Allen) Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang
  Tags: OOD, AAML | 79/46/0 | 20 Feb 2022
Exact Solutions of a Deep Linear Network
  Liu Ziyin, Botao Li, Xiangmin Meng
  Tags: ODL | 19/21/0 | 10 Feb 2022
Approximating Full Conformal Prediction at Scale via Influence Functions
  Javier Abad, Umang Bhatt, Adrian Weller, Giovanni Cherubin
  31/10/0 | 02 Feb 2022
Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
  Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao-quan Song, Atri Rudra, Christopher Ré
  33/75/0 | 30 Nov 2021
MAE-DET: Revisiting Maximum Entropy Principle in Zero-Shot NAS for Efficient Object Detection
  Zhenhong Sun, Ming Lin, Xiuyu Sun, Zhiyu Tan, Hao Li, Rong Jin
  23/32/0 | 26 Nov 2021
Meta-Learning Sparse Implicit Neural Representations
  Jaehoon Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin
  22/44/0 | 27 Oct 2021
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
  Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, ..., Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin
  15/89/0 | 26 Oct 2021
CHIP: CHannel Independence-based Pruning for Compact Neural Networks
  Yang Sui, Miao Yin, Yi Xie, Huy Phan, S. Zonouz, Bo Yuan
  Tags: VLM | 30/128/0 | 26 Oct 2021
When to Prune? A Policy towards Early Structural Pruning
  Maying Shen, Pavlo Molchanov, Hongxu Yin, J. Álvarez
  Tags: VLM | 22/52/0 | 22 Oct 2021
Probabilistic Fine-tuning of Pruning Masks and PAC-Bayes Self-bounded Learning
  Soufiane Hayou, Bo He, Gintare Karolina Dziugaite
  37/2/0 | 22 Oct 2021
Lottery Tickets with Nonzero Biases
  Jonas Fischer, Advait Gadhikar, R. Burkholz
  14/6/0 | 21 Oct 2021
ProxyBO: Accelerating Neural Architecture Search via Bayesian Optimization with Zero-cost Proxies
  Yu Shen, Yang Li, Jian Zheng, Wentao Zhang, Peng Yao, Jixiang Li, Sen Yang, Ji Liu, Cui Bin
  Tags: AI4CE | 42/30/0 | 20 Oct 2021
S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
  Shiyu Liu, Chong Min John Tan, Mehul Motani
  Tags: CLL | 26/4/0 | 17 Oct 2021
Prune Your Model Before Distill It
  Jinhyuk Park, Albert No
  Tags: VLM | 43/27/0 | 30 Sep 2021
NASI: Label- and Data-agnostic Neural Architecture Search at Initialization
  Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi, K. H. Low
  22/43/0 | 02 Sep 2021
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
  Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
  Tags: OOD | 28/49/0 | 28 Jun 2021
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
  Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
  34/111/0 | 19 Jun 2021
FEAR: A Simple Lightweight Method to Rank Architectures
  Debadeepta Dey, Shital C. Shah, Sébastien Bubeck
  Tags: OOD | 22/4/0 | 07 Jun 2021
A Brain Basis of Dynamical Intelligence for AI and Computational Neuroscience
  J. Monaco, Kanaka Rajan, Grace M. Hwang
  Tags: AI4CE | 26/6/0 | 15 May 2021
Recent Advances on Neural Network Pruning at Initialization
  Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
  Tags: CVBM | 33/64/0 | 11 Mar 2021
Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
  Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus
  Tags: AAML | 40/71/0 | 04 Mar 2021
Learning Neural Network Subspaces
  Mitchell Wortsman, Maxwell Horton, Carlos Guestrin, Ali Farhadi, Mohammad Rastegari
  Tags: UQCV | 27/85/0 | 20 Feb 2021
Rethinking Weight Decay for Efficient Neural Network Pruning
  Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
  26/25/0 | 20 Nov 2020
Low-Complexity Models for Acoustic Scene Classification Based on Receptive Field Regularization and Frequency Damping
  Khaled Koutini, Florian Henkel, Hamid Eghbalzadeh, Gerhard Widmer
  14/9/0 | 05 Nov 2020
Are Wider Nets Better Given the Same Number of Parameters?
  A. Golubeva, Behnam Neyshabur, Guy Gur-Ari
  21/44/0 | 27 Oct 2020
Brain-Inspired Learning on Neuromorphic Substrates
  Friedemann Zenke, Emre Neftci
  38/87/0 | 22 Oct 2020
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
  Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, J. Lee
  6/85/0 | 22 Sep 2020
Progressive Skeletonization: Trimming More Fat from a Network at Initialization
  Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania
  31/95/0 | 16 Jun 2020
An Overview of Neural Network Compression
  James O'Neill
  Tags: AI4CE | 45/98/0 | 05 Jun 2020
What is the State of Neural Network Pruning?
  Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
  191/1,027/0 | 06 Mar 2020
On the Decision Boundaries of Neural Networks: A Tropical Geometry Perspective
  Motasem Alfarra, Adel Bibi, Hasan Hammoud, M. Gaafar, Bernard Ghanem
  11/26/0 | 20 Feb 2020
Model Pruning Enables Efficient Federated Learning on Edge Devices
  Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas
  30/444/0 | 26 Sep 2019