ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Dynamic Network Surgery for Efficient DNNs
16 August 2016 · Yiwen Guo, Anbang Yao, Yurong Chen
arXiv 1608.04493 (abs) · PDF · HTML · GitHub (186★)
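For context on the indexed paper: its title refers to a prune-and-splice ("surgery") scheme, in which weights are masked out when they become weak but can be re-activated if they regrow during training. The following is a minimal NumPy sketch of that general idea only; the function name, thresholds, and shapes are illustrative assumptions, not the authors' actual algorithm or settings.

```python
import numpy as np

def surgery_step(W, mask, prune_thresh, splice_thresh):
    """One prune-and-splice update on a weight matrix W.

    Entries whose magnitude falls below prune_thresh are masked out
    (pruned); entries that regrow above splice_thresh are re-activated
    (spliced). W itself keeps training densely, so a mistaken prune
    can be undone by a later splice.
    """
    mask = mask.copy()
    mask[np.abs(W) < prune_thresh] = 0.0   # prune weak connections
    mask[np.abs(W) > splice_thresh] = 1.0  # splice strong ones back in
    return mask

# Toy example with hypothetical thresholds.
W = np.array([0.05, -0.5, 0.2])
mask = np.ones_like(W)
mask = surgery_step(W, mask, prune_thresh=0.1, splice_thresh=0.3)
effective = W * mask  # weights actually used in the forward pass
```

In this toy run, only the 0.05 entry is pruned: 0.2 sits between the two thresholds, so its mask bit is left unchanged, which is the hysteresis that keeps borderline weights from flapping between pruned and active.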

Papers citing "Dynamic Network Surgery for Efficient DNNs"
50 / 359 papers shown
• Pruning Filter in Filter (NeurIPS). Fanxu Meng, Hao Cheng, Ke Li, Huixiang Luo, Xiao-Wei Guo, Guangming Lu, Xing Sun. 30 Sep 2020. [VLM]
• AdderSR: Towards Energy Efficient Image Super-Resolution (CVPR). Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, Dacheng Tao. 18 Sep 2020. [SupR]
• Holistic Filter Pruning for Efficient Deep Neural Networks (WACV). Lukas Enderich, Fabian Timm, Wolfram Burgard. 17 Sep 2020.
• Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning. Bingbing Li, Zhenglun Kong, Tianyun Zhang, Ji Li, Hao Sun, Hang Liu, Caiwen Ding. 17 Sep 2020. [VLM]
• CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics. Guan Li, Junpeng Wang, Han-Wei Shen, Kaixin Chen, Guihua Shan, Zhonghua Lu. 08 Sep 2020. [AAML]
• Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach. Malena Reiners, K. Klamroth, Michael Stiglmayr. 31 Aug 2020.
• HALO: Learning to Prune Neural Networks with Shrinkage. Skyler Seto, M. Wells, Wenyu Zhang. 24 Aug 2020.
• Towards Modality Transferable Visual Information Representation with Optimal Model Compression. Rongqun Lin, Linwei Zhu, Shiqi Wang, Sam Kwong. 13 Aug 2020.
• Growing Efficient Deep Networks by Structured Continuous Sparsification. Xin Yuan, Pedro H. P. Savarese, Michael Maire. 30 Jul 2020. [3DPC]
• RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices. Wei Niu, Mengshu Sun, Hao Sun, Jou-An Chen, Jiexiong Guan, Xipeng Shen, Yanzhi Wang, Sijia Liu, Xue Lin, Bin Ren. 20 Jul 2020. [MQ]
• Joint Multi-User DNN Partitioning and Computational Resource Allocation for Collaborative Edge Intelligence. Xin Tang, Xu Chen, Liekang Zeng, Shuai Yu, Lin Chen. 15 Jul 2020.
• ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting. Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding. 07 Jul 2020.
• Bespoke vs. Prêt-à-Porter Lottery Tickets: Exploiting Mask Similarity for Trainable Sub-Network Finding. Michela Paganini, Jessica Zosa Forde. 06 Jul 2020. [UQCV]
• ESPN: Extremely Sparse Pruned Networks. Minsu Cho, Ameya Joshi, Chinmay Hegde. 28 Jun 2020.
• Topological Insights into Sparse Neural Networks. Shiwei Liu, T. Lee, Anil Yaman, Zahra Atashgahi, David L. Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu. 24 Jun 2020.
• Slimming Neural Networks using Adaptive Connectivity Scores. Madan Ravi Ganesh, Dawsin Blanchard, Jason J. Corso, Salimeh Yasaei Sekeh. 22 Jun 2020.
• Exploiting Weight Redundancy in CNNs: Beyond Pruning and Quantization. Yuan Wen, David Gregg. 22 Jun 2020. [MQ]
• Progressive Skeletonization: Trimming more fat from a network at initialization (ICLR). Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Juil Sock, Grégory Rogez, P. Dokania. 16 Jun 2020.
• Finding trainable sparse networks through Neural Tangent Transfer (ICML). Tianlin Liu, Friedemann Zenke. 15 Jun 2020.
• O(1) Communication for Distributed SGD through Two-Level Gradient Averaging (CLUSTER). Subhadeep Bhattacharya, Weikuan Yu, Fahim Chowdhury. 12 Jun 2020. [FedML]
• Dynamic Model Pruning with Feedback (ICLR). Tao R. Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi. 12 Jun 2020.
• 3D Point Cloud Feature Explanations Using Gradient-Based Methods (IJCNN). A. Gupta, Simon Watson, Hujun Yin. 09 Jun 2020. [3DPC]
• Pruning neural networks without any data by iteratively conserving synaptic flow (NeurIPS). Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli. 09 Jun 2020.
• A Framework for Neural Network Pruning Using Gibbs Distributions (GLOBECOM). Alex Labach, S. Valaee. 08 Jun 2020.
• Weight Pruning via Adaptive Sparsity Loss. George Retsinas, Athena Elafrou, G. Goumas, Petros Maragos. 04 Jun 2020.
• Feature Statistics Guided Efficient Filter Pruning. Hang Li, Chen Ma, Wenyuan Xu, Xue Liu. 21 May 2020.
• Joint Multi-Dimension Pruning via Numerical Gradient Update. Zechun Liu, Xinming Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun. 18 May 2020.
• Dynamic Sparsity Neural Networks for Automatic Speech Recognition. Zhaofeng Wu, Ding Zhao, Qiao Liang, Jiahui Yu, Anmol Gulati, Ruoming Pang. 16 May 2020.
• Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks. Meet P. Vadera, B. Jalaeian, Benjamin M. Marlin. 16 May 2020. [BDL, FedML, UQCV]
• Movement Pruning: Adaptive Sparsity by Fine-Tuning (NeurIPS). Victor Sanh, Thomas Wolf, Alexander M. Rush. 15 May 2020.
• PENNI: Pruned Kernel Sharing for Efficient CNN Inference (ICML). Shiyu Li, Edward Hanson, Xue Yang, Yiran Chen. 14 May 2020.
• Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers (ICLR). Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So. 14 May 2020.
• Compact Neural Representation Using Attentive Network Pruning. Mahdi Biparva, John K. Tsotsos. 10 May 2020. [CVBM]
• Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey. Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah. 08 May 2020. [3DPC, MedIm]
• Data-Free Network Quantization With Adversarial Knowledge Distillation. Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee. 08 May 2020. [MQ]
• Dependency Aware Filter Pruning. Kai Zhao, Xinyu Zhang, Qi Han, Ming-Ming Cheng. 06 May 2020.
• Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture (ACL). Christopher Brix, Parnia Bahar, Hermann Ney. 04 May 2020.
• DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering (ACL). Qingqing Cao, H. Trivedi, A. Balasubramanian, Niranjan Balasubramanian. 02 May 2020.
• Rethinking Class-Discrimination Based CNN Channel Pruning. Yuchen Liu, D. Wentzlaff, S. Kung. 29 Apr 2020.
• WoodFisher: Efficient Second-Order Approximation for Neural Network Compression. Sidak Pal Singh, Dan Alistarh. 29 Apr 2020.
• PERMDNN: Efficient Compressed DNN Architecture with Permuted Diagonal Matrices (MICRO). Chunhua Deng, Siyu Liao, Yi Xie, Keshab K. Parhi, Xuehai Qian, Bo Yuan. 23 Apr 2020.
• A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods. Tianyun Zhang, Xiaolong Ma, Zheng Zhan, Shangli Zhou, Minghai Qin, Fei Sun, Yen-kuang Chen, Caiwen Ding, M. Fardad, Yanzhi Wang. 12 Apr 2020.
• Acceleration of Convolutional Neural Network Using FFT-Based Split Convolutions. Kamran Chitsaz, M. Hajabdollahi, N. Karimi, S. Samavi, S. Shirani. 27 Mar 2020.
• Towards Practical Lottery Ticket Hypothesis for Adversarial Training. Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana. 06 Mar 2020. [AAML]
• HYDRA: Pruning Adversarially Robust Neural Networks. Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana. 24 Feb 2020. [AAML]
• Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks. Sai Aparna Aketi, Sourjya Roy, A. Raghunathan, Kaushik Roy. 23 Feb 2020.
• Network Pruning via Annealing and Direct Sparsity Control. Yangzi Guo, Yiyuan She, Adrian Barbu. 11 Feb 2020.
• Convolutional Neural Network Pruning Using Filter Attenuation (ICIP). Morteza Mousa Pasandi, M. Hajabdollahi, N. Karimi, S. Samavi, S. Shirani. 09 Feb 2020. [3DPC]
• Soft Threshold Weight Reparameterization for Learnable Sparsity (ICML). Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi. 08 Feb 2020.
• MSE-Optimal Neural Network Initialization via Layer Fusion. Ramina Ghods, Andrew Lan, Tom Goldstein, Christoph Studer. 28 Jan 2020. [FedML]
Page 1 of 8