DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation

5 April 2020
Xuefei Ning, Tianchen Zhao, Wenshuo Li, Peng Lei, Yu Wang, Huazhong Yang
arXiv:2004.02164

Papers citing "DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation"

15 papers shown

Advancing Weight and Channel Sparsification with Enhanced Saliency
Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez
05 Feb 2025

Playing the Lottery With Concave Regularizers for Sparse Trainable Neural Networks
Giulia Fracastoro, Sophie M. Fosson, Andrea Migliorati, G. Calafiore
19 Jan 2025

Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution
Z. Su, Jiehua Zhang, Tianpeng Liu, Zhen Liu, Shuanghui Zhang, M. Pietikäinen, Li Liu
13 Apr 2023

TDC: Towards Extremely Efficient CNNs on GPUs via Hardware-Aware Tucker Decomposition
Lizhi Xiang, Miao Yin, Chengming Zhang, Aravind Sukumaran-Rajam, P. Sadayappan, Bo Yuan, Dingwen Tao
07 Nov 2022

Cut Inner Layers: A Structured Pruning Strategy for Efficient U-Net GANs
Bo-Kyeong Kim, Shinkook Choi, Hancheol Park
29 Jun 2022

Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning
Seunghyun Lee, B. Song
05 Mar 2022

GhostNets on Heterogeneous Devices via Cheap Operations
Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian
10 Jan 2022

Batch Normalization Tells You Which Filter is Important
Junghun Oh, Heewon Kim, Sungyong Baik, Chee Hong, Kyoung Mu Lee
02 Dec 2021

Self-supervised Feature-Gate Coupling for Dynamic Network Pruning
Mengnan Shi, Chang-rui Liu, Jianbin Jiao, QiXiang Ye
29 Nov 2021

Differentiable Network Pruning for Microcontrollers
Edgar Liberis, Nicholas D. Lane
15 Oct 2021

Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
01 Sep 2021

Carrying out CNN Channel Pruning in a White Box
Yu-xin Zhang, Mingbao Lin, Chia-Wen Lin, Jie Chen, Feiyue Huang, Yongjian Wu, Yonghong Tian, R. Ji
24 Apr 2021

Auto Graph Encoder-Decoder for Neural Network Pruning
Sixing Yu, Arya Mazaheri, Ali Jannesari
25 Nov 2020

Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
Yawei Li, Shuhang Gu, Christoph Mayer, Luc Van Gool, Radu Timofte
19 Mar 2020

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam
09 Apr 2018