SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
arXiv:1810.02340 · 4 October 2018
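For context on the method the listing below cites: SNIP prunes a network once, at initialization, before any training. It attaches a multiplicative mask c to every connection, measures the sensitivity of the loss to each mask variable on a single mini-batch, s_j = |∂L/∂c_j|, and keeps only the top-scoring connections for the target sparsity. At an all-ones mask this derivative equals w_j · ∂L/∂w_j, so one forward/backward pass suffices. The sketch below is a minimal PyTorch rendering of that criterion, assuming a standard classifier trained with cross-entropy; `snip_prune_masks` and `keep_ratio` are illustrative names, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def snip_prune_masks(model, inputs, targets, keep_ratio=0.1):
    """Score every connection with one mini-batch at initialization (SNIP-style).

    At an all-ones mask c, the connection sensitivity |dL/dc_j| equals
    |w_j * dL/dw_j|, so a single forward/backward pass is enough.
    Illustrative sketch, not the authors' reference implementation.
    """
    # Weight tensors of conv/linear layers; biases and norm params are kept dense.
    weights = [p for p in model.parameters() if p.dim() > 1]
    loss = F.cross_entropy(model(inputs), targets)
    grads = torch.autograd.grad(loss, weights)
    # Per-connection saliency |w * dL/dw|; the paper's normalization by the
    # total score does not change the ranking, so it is omitted here.
    scores = [(w.detach() * g).abs() for w, g in zip(weights, grads)]
    # Keep the globally top-scoring keep_ratio fraction of connections.
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()
    return [s >= threshold for s in scores]  # boolean masks, True = keep
```

One way to apply the resulting masks is `torch.nn.utils.prune.custom_from_mask` on each layer, or simply zeroing the pruned weights and keeping them frozen; the sparse network is then trained from scratch in the usual way.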
Papers citing "SNIP: Single-shot Network Pruning based on Connection Sensitivity" (50 of 709 shown):

| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Efficient Model Adaptation for Continual Learning at the Edge | Z. Daniels, Jun Hu, M. Lomnitz, Peter G. Miller, Aswin Raghavan, Joe Zhang, M. Piacentino, David C. Zhang | OOD | 27 / 2 / 0 | 03 Aug 2023 |
| Accurate Neural Network Pruning Requires Rethinking Sparse Optimization | Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh | VLM | 35 / 11 / 0 | 03 Aug 2023 |
| YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems | Ivan Lazarevich, Matteo Grimaldi, Ravi Kumar, Saptarshi Mitra, Shahrukh Khan, Sudhakar Sah | ObjD, FedML, ELM | 34 / 10 / 0 | 26 Jul 2023 |
| EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization | Peijie Dong, Lujun Li, Zimian Wei, Xin-Yi Niu, Zhiliang Tian, H. Pan | MQ | 51 / 28 / 0 | 20 Jul 2023 |
| An Evaluation of Zero-Cost Proxies -- from Neural Architecture Performance to Model Robustness | Jovita Lukasik, Michael Moeller, M. Keuper | | 30 / 1 / 0 | 18 Jul 2023 |
| UPSCALE: Unconstrained Channel Pruning | Alvin Wan, Hanxiang Hao, K. Patnaik, Yueyang Xu, Omer Hadad, David Guera, Zhile Ren, Qi Shan | | 34 / 4 / 0 | 17 Jul 2023 |
| Pruning vs Quantization: Which is Better? | Andrey Kuzmin, Markus Nagel, M. V. Baalen, Arash Behboodi, Tijmen Blankevoort | MQ | 27 / 48 / 0 | 06 Jul 2023 |
| Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities | Guihong Li, Duc-Tuong Hoang, Kartikeya Bhardwaj, Ming Lin, Zhangyang Wang, R. Marculescu | | 46 / 11 / 0 | 05 Jul 2023 |
| AutoST: Training-free Neural Architecture Search for Spiking Transformers | Ziqing Wang, Qidong Zhao, Jinku Cui, Xu Liu, Dongkuan Xu | | 25 / 5 / 0 | 01 Jul 2023 |
| Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging | Max Zimmer, Christoph Spiegel, Sebastian Pokutta | MoMe | 46 / 14 / 0 | 29 Jun 2023 |
| Homological Neural Networks: A Sparse Architecture for Multivariate Complexity | Yuanrong Wang, Antonio Briola, T. Aste | | 46 / 6 / 0 | 27 Jun 2023 |
| Adaptive Sharpness-Aware Pruning for Robust Sparse Networks | Anna Bair, Hongxu Yin, Maying Shen, Pavlo Molchanov, J. Álvarez | | 43 / 10 / 0 | 25 Jun 2023 |
| Neural Network Pruning for Real-time Polyp Segmentation | Suman Sapkota, Pranav Poudel, Sudarshan Regmi, Bibek Panthi, Binod Bhattarai | MedIm | 38 / 0 / 0 | 22 Jun 2023 |
| Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training | A. Nowak, Bram Grooten, Decebal Constantin Mocanu, Jacek Tabor | | 33 / 9 / 0 | 21 Jun 2023 |
| A Simple and Effective Pruning Approach for Large Language Models | Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter | | 87 / 361 / 0 | 20 Jun 2023 |
| Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models | A. Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang | VLM | 44 / 22 / 0 | 18 Jun 2023 |
| Transferability of Winning Lottery Tickets in Neural Network Differential Equation Solvers | Edward Prideaux-Ghee | | 40 / 0 / 0 | 16 Jun 2023 |
| Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Ren Wang, Haoliang Sun, Qinglai Wei, Xiushan Nie, Yuling Ma, Yilong Yin | | 26 / 0 / 0 | 14 Jun 2023 |
| Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision | Arturs Berzins | | 27 / 5 / 0 | 12 Jun 2023 |
| Resource Efficient Neural Networks Using Hessian Based Pruning | J. Chong, Manas Gupta, Lihui Chen | | 22 / 3 / 0 | 12 Jun 2023 |
| Spatial Re-parameterization for N:M Sparsity | Yuxin Zhang, Mingbao Lin, Mingliang Xu, Yonghong Tian, Rongrong Ji | | 46 / 2 / 0 | 09 Jun 2023 |
| Magnitude Attention-based Dynamic Pruning | Jihye Back, Namhyuk Ahn, Jang-Hyun Kim | | 43 / 2 / 0 | 08 Jun 2023 |
| Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations | Hyeonjeong Ha, Minseon Kim, Sung Ju Hwang | OOD, AAML | 37 / 5 / 0 | 08 Jun 2023 |
| The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter | Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang | VLM | 34 / 33 / 0 | 06 Jun 2023 |
| Does a sparse ReLU network training problem always admit an optimum? | Quoc-Tung Le, E. Riccietti, Rémi Gribonval | | 19 / 2 / 0 | 05 Jun 2023 |
| Diffused Redundancy in Pre-trained Representations | Vedant Nanda, Till Speicher, John P. Dickerson, S. Feizi, Krishna P. Gummadi, Adrian Weller | SSL | 29 / 2 / 0 | 31 May 2023 |
| Dynamic Sparsity Is Channel-Level Sparsity Learner | Lu Yin, Gen Li, Meng Fang, Lijuan Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu | | 38 / 20 / 0 | 30 May 2023 |
| Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis | S. M. Patil, Loizos Michael, C. Dovrolis | | 39 / 0 / 0 | 28 May 2023 |
| One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning | Guangtao Zeng, Peiyuan Zhang, Wei Lu | | 23 / 21 / 0 | 28 May 2023 |
| Adaptive Sparsity Level during Training for Efficient Time Series Forecasting with Transformers | Zahra Atashgahi, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu | AI4TS, AI4CE | 34 / 1 / 0 | 28 May 2023 |
| Pruning at Initialization -- A Sketching Perspective | Noga Bar, Raja Giryes | | 29 / 1 / 0 | 27 May 2023 |
| Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations | Elia Cunegatti, Matteo Farina, Doina Bucur, Giovanni Iacca | | 39 / 1 / 0 | 26 May 2023 |
| Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning | Moonseok Choi, Hyungi Lee, G. Nam, Juho Lee | | 40 / 2 / 0 | 24 May 2023 |
| Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference | Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim | BDL | 34 / 4 / 0 | 24 May 2023 |
| Pruning Pre-trained Language Models with Principled Importance and Self-regularization | Siyu Ren, Kenny Q. Zhu | | 30 / 2 / 0 | 21 May 2023 |
| Learning Activation Functions for Sparse Neural Networks | Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer | | 27 / 4 / 0 | 18 May 2023 |
| Adaptive Federated Pruning in Hierarchical Wireless Networks | Xiaonan Liu, Shiqiang Wang, Yansha Deng, A. Nallanathan | | 41 / 11 / 0 | 15 May 2023 |
| Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models | D. Honegger, Konstantin Schurholt, Damian Borth | | 37 / 4 / 0 | 26 Apr 2023 |
| Model Pruning Enables Localized and Efficient Federated Learning for Yield Forecasting and Data Sharing | An-dong Li, Milan Markovic, P. Edwards, Georgios Leontidis | FedML | 35 / 16 / 0 | 19 Apr 2023 |
| SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training | Riyasat Ohib, Bishal Thapaliya, Pratyush Gaggenapalli, Qingbin Liu, Vince D. Calhoun, Sergey Plis | FedML | 21 / 2 / 0 | 15 Apr 2023 |
| AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks | Abhisek Kundu, Naveen Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey | | 39 / 1 / 0 | 14 Apr 2023 |
| Structured Pruning for Multi-Task Deep Neural Networks | Siddhant Garg, Lijun Zhang, Hui Guan | | 19 / 1 / 0 | 13 Apr 2023 |
| Model Sparsity Can Simplify Machine Unlearning | Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu | MU | 36 / 108 / 0 | 11 Apr 2023 |
| NTK-SAP: Improving neural network pruning by aligning training dynamics | Yite Wang, Dawei Li, Ruoyu Sun | | 42 / 19 / 0 | 06 Apr 2023 |
| Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement | Xiang-yu Zhu, Renrui Zhang, Bowei He, A-Long Zhou, Dong Wang, Bingyan Zhao, Peng Gao | VLM | 42 / 80 / 0 | 03 Apr 2023 |
| DisWOT: Student Architecture Search for Distillation WithOut Training | Peijie Dong, Lujun Li, Zimian Wei | | 46 / 57 / 0 | 28 Mar 2023 |
| Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency | Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie | | 38 / 3 / 0 | 21 Mar 2023 |
| Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing | Nived Rajaraman, Devvrit, Aryan Mokhtari, Kannan Ramchandran | | 29 / 0 / 0 | 20 Mar 2023 |
| Induced Feature Selection by Structured Pruning | Nathan Hubens, V. Delvigne, M. Mancas, B. Gosselin, Marius Preda, T. Zaharia | | 22 / 0 / 0 | 20 Mar 2023 |
| SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models | Vithursan Thangarasa, Abhay Gupta, William Marshall, Tianda Li, Kevin Leong, D. DeCoste, Sean Lie, Shreyas Saxena | MoE, AI4CE | 29 / 18 / 0 | 18 Mar 2023 |