Progressive Skeletonization: Trimming more fat from a network at initialization
arXiv:2006.09081, 16 June 2020
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania
Papers citing "Progressive Skeletonization: Trimming more fat from a network at initialization" (50 / 60 papers shown)
Hyperflows: Pruning Reveals the Importance of Weights (06 Apr 2025)
  Eugen Barbulescu, Antonio Alexoaie
Advancing Weight and Channel Sparsification with Enhanced Saliency (05 Feb 2025)
  Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez
Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning (20 Nov 2024)
  Andy Li, A. Durrant, Milan Markovic, Lu Yin, Georgios Leontidis, Tianlong Chen
Electrostatic Force Regularization for Neural Structured Pruning (17 Nov 2024)
  Abdesselam Ferdi, A. Taleb-Ahmed, A. Nakib, Youcef Ferdi
OStr-DARTS: Differentiable Neural Architecture Search based on Operation Strength (22 Sep 2024)
  Le Yang, Ziwei Zheng, Yizeng Han, Shiji Song, Gao Huang, Fan Li
OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition (20 Sep 2024) [VLM]
  Stephen Zhang, V. Papyan
Sparsest Models Elude Pruning: An Exposé of Pruning's Current Capabilities (04 Jul 2024)
  Stephen Zhang, V. Papyan
FedMap: Iterative Magnitude-Based Pruning for Communication-Efficient Federated Learning (27 Jun 2024)
  Alexander Herzog, Robbie Southam, Ioannis Mavromatis, Aftab Khan
Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning (03 Jun 2024)
  Leonardo Iurada, Marco Ciccone, Tatiana Tommasi
Effective Subset Selection Through The Lens of Neural Network Pruning (03 Jun 2024) [CVBM]
  Noga Bar, Raja Giryes
Sparse maximal update parameterization: A holistic approach to sparse training dynamics (24 May 2024)
  Nolan Dey, Shane Bergsma, Joel Hestness
Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning (15 May 2024) [FedML]
  Riyasat Ohib, Bishal Thapaliya, Gintare Karolina Dziugaite, Jingyu Liu, Vince D. Calhoun, Sergey Plis
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks (09 May 2024)
  Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Min-man Wu, Xiaoli Li
ONNXPruner: ONNX-Based General Model Pruning Adapter (10 Apr 2024)
  Dongdong Ren, Wenbin Li, Tianyu Ding, Lei Wang, Qi Fan, Jing Huo, Hongbing Pan, Yang Gao
Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning (09 Apr 2024) [AAML]
  Emre Ozfatura, Kerem Ozfatura, Alptekin Kupcu, Deniz Gunduz
MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning (08 Apr 2024) [VLM]
  Matteo Farina, Massimiliano Mancini, Elia Cunegatti, Gaowen Liu, Giovanni Iacca, Elisa Ricci
DRIVE: Dual Gradient-Based Rapid Iterative Pruning (01 Apr 2024)
  Dhananjay Saikumar, Blesson Varghese
SEVEN: Pruning Transformer Model by Reserving Sentinels (19 Mar 2024)
  Jinying Xiao, Ping Li, Jie Nie, Zhe Tang
No Free Prune: Information-Theoretic Barriers to Pruning at Initialization (02 Feb 2024)
  Tanishq Kumar, Kevin Luo, Mark Sellke
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression (31 Jan 2024)
  Dong Chen, Ning Liu, Yichen Zhu, Zhengping Che, Rui Ma, Fachao Zhang, Xiaofeng Mou, Yi Chang, Jian Tang
LEMON: Lossless model expansion (12 Oct 2023)
  Yite Wang, Jiahao Su, Hanlin Lu, Cong Xie, Tianyi Liu, Jianbo Yuan, Haibin Lin, Ruoyu Sun, Hongxia Yang
Homological Convolutional Neural Networks (26 Aug 2023) [LMTD]
  Antonio Briola, Yuanrong Wang, Silvia Bartolucci, T. Aste
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks (21 Aug 2023)
  Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min-man Wu, Xiaoli Li, Weisi Lin
Adaptive Sharpness-Aware Pruning for Robust Sparse Networks (25 Jun 2023)
  Anna Bair, Hongxu Yin, Maying Shen, Pavlo Molchanov, J. Álvarez
Resource Efficient Neural Networks Using Hessian Based Pruning (12 Jun 2023)
  J. Chong, Manas Gupta, Lihui Chen
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter (06 Jun 2023) [VLM]
  Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang
Pruning at Initialization -- A Sketching Perspective (27 May 2023)
  Noga Bar, Raja Giryes
Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations (26 May 2023)
  Elia Cunegatti, Matteo Farina, Doina Bucur, Giovanni Iacca
SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training (15 Apr 2023) [FedML]
  Riyasat Ohib, Bishal Thapaliya, Pratyush Gaggenapalli, J. Liu, Vince D. Calhoun, Sergey Plis
NTK-SAP: Improving neural network pruning by aligning training dynamics (06 Apr 2023)
  Yite Wang, Dawei Li, Ruoyu Sun
Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency (21 Mar 2023)
  Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! (03 Mar 2023)
  Shiwei Liu, Tianlong Chen, Zhenyu (Allen) Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang
Balanced Training for Sparse GANs (28 Feb 2023)
  Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun
When Layers Play the Lottery, all Tickets Win at Initialization (25 Jan 2023)
  Artur Jordão, George Correa de Araujo, H. Maia, Hélio Pedrini
Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning (07 Dec 2022)
  Yingchun Wang, Song Guo, Jingcai Guo, Weizhan Zhang, Yi Tian Xu, Jiewei Zhang, Yi Liu
Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off (30 Nov 2022)
  Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding
Soft Masking for Cost-Constrained Channel Pruning (04 Nov 2022)
  Ryan Humble, Maying Shen, J. Latorre, Eric Darve, J. Álvarez
Structural Pruning via Latency-Saliency Knapsack (13 Oct 2022)
  Maying Shen, Hongxu Yin, Pavlo Molchanov, Lei Mao, Jianna Liu, J. Álvarez
Why Random Pruning Is All We Need to Start Sparse (05 Oct 2022)
  Advait Gadhikar, Sohom Mukherjee, R. Burkholz
Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning (29 Sep 2022)
  Manas Gupta, Efe Camci, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Chuan-Sheng Foo, Wu Min, Lin Jie
Neural Network Panning: Screening the Optimal Sparse Network Before Training (27 Sep 2022) [VLM]
  Xiatao Kang, P. Li, Jiayi Yao, Chengxi Li
Optimizing Connectivity through Network Gradients for Restricted Boltzmann Machines (14 Sep 2022)
  A. C. N. D. Oliveira, Daniel R. Figueiredo
One-shot Network Pruning at Initialization with Discriminative Image Patches (13 Sep 2022) [VLM]
  Yinan Yang, Yu Wang, Yi Ji, Heng Qi, Jien Kato
Winning the Lottery Ahead of Time: Efficient Early Network Pruning (21 Jun 2022)
  John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann
Pruning for Feature-Preserving Circuits in CNNs (03 Jun 2022)
  Christopher Hamblin, Talia Konkle, G. Alvarez
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey (17 May 2022) [DD]
  Paul Wimmer, Jens Mehnert, A. P. Condurache
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification (06 Apr 2022)
  Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs (15 Mar 2022) [CVBM]
  Paul Wimmer, Jens Mehnert, A. P. Condurache
Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients (16 Feb 2022)
  Milad Alizadeh, Shyam A. Tailor, L. Zintgraf, Joost R. van Amersfoort, Sebastian Farquhar, Nicholas D. Lane, Y. Gal
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training (05 Feb 2022)
  Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, D. Mocanu, Zhangyang Wang, Mykola Pechenizkiy