ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon

22 May 2017
Xin Dong
Shangyu Chen
Sinno Jialin Pan

Papers citing "Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon"

50 / 275 papers shown
CR-SFP: Learning Consistent Representation for Soft Filter Pruning
Jingyang Xiang
Zhuangzhi Chen
Jianbiao Mei
Siqi Li
Jun Chen
Yong-Jin Liu
97
0
0
17 Dec 2023
Optimizing Dense Feed-Forward Neural Networks
Luis Balderas
Miguel Lastra
José M. Benítez
115
8
0
16 Dec 2023
SlimSAM: 0.1% Data Makes Segment Anything Slim
Zigeng Chen
Gongfan Fang
Xinyin Ma
Xinchao Wang
185
17
0
08 Dec 2023
F3-Pruning: A Training-Free and Generalized Pruning Strategy towards Faster and Finer Text-to-Video Synthesis
Jianzhi Liu
Lianli Gao
Jingkuan Song
DiffM VGen
89
8
0
06 Dec 2023
Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective
Can Jin
Tianjin Huang
Yihua Zhang
Mykola Pechenizkiy
Sijia Liu
Shiwei Liu
Tianlong Chen
VLM
216
29
0
03 Dec 2023
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
Daniel Y. Fu
Hermann Kumbong
Eric N. D. Nguyen
Christopher Ré
VLM
152
35
0
10 Nov 2023
PriPrune: Quantifying and Preserving Privacy in Pruned Federated Learning
Tianyue Chu
Mengwei Yang
Nikolaos Laoutaris
A. Markopoulou
106
8
0
30 Oct 2023
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery
Tianyi Chen
Tianyu Ding
Badal Yadav
Ilya Zharkov
Luming Liang
149
34
0
24 Oct 2023
One is More: Diverse Perspectives within a Single Network for Efficient DRL
Yiqin Tan
Ling Pan
Longbo Huang
OffRL
163
0
0
21 Oct 2023
Breaking through Deterministic Barriers: Randomized Pruning Mask Generation and Selection
Jianwei Li
Weizhi Gao
Qi Lei
Dongkuan Xu
114
3
0
19 Oct 2023
The Cost of Down-Scaling Language Models: Fact Recall Deteriorates before In-Context Learning
Tian Jin
Nolan Clement
Xin Dong
Vaishnavh Nagarajan
Michael Carbin
Jonathan Ragan-Kelley
Gintare Karolina Dziugaite
LRM
127
5
0
07 Oct 2023
Feather: An Elegant Solution to Effective DNN Sparsification
Athanasios Glentis Georgoulakis
George Retsinas
Petros Maragos
116
2
0
03 Oct 2023
Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
Lu Yin
Ajay Jaiswal
Shiwei Liu
Souvik Kundu
Zhangyang Wang
169
12
0
29 Sep 2023
LAPP: Layer Adaptive Progressive Pruning for Compressing CNNs from Scratch
P. Zhai
K. Guo
Fan Liu
Xiaofen Xing
Xiangmin Xu
105
3
0
25 Sep 2023
EPTQ: Enhanced Post-Training Quantization via Label-Free Hessian
Ofir Gordon
H. Habi
Arnon Netzer
MQ
112
2
0
20 Sep 2023
Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity
Matteo Grimaldi
Darshan C. Ganji
Ivan Lazarevich
Sudhakar Sah
112
11
0
12 Sep 2023
QuantEase: Optimization-based Quantization for Language Models
Kayhan Behdin
Ayan Acharya
Aman Gupta
Qingquan Song
Siyu Zhu
S. Keerthi
Rahul Mazumder
MQ
160
23
0
05 Sep 2023
Estimation and Hypothesis Testing of Derivatives in Smoothing Spline ANOVA Models
Ruiqi Liu
Kexuan Li
Meng Li
91
3
0
26 Aug 2023
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks
Kaixin Xu
Zhe Wang
Xue Geng
Jie Lin
Ruibing Jin
Xiaoli Li
Weisi Lin
85
16
0
21 Aug 2023
Influence Function Based Second-Order Channel Pruning: Evaluating True Loss Changes For Pruning Is Possible Without Retraining
Hongrong Cheng
Miao Zhang
Javen Qinfeng Shi
AAML
88
4
0
13 Aug 2023
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
Denis Kuznedelev
Eldar Kurtic
Eugenia Iofinova
Elias Frantar
Alexandra Peste
Dan Alistarh
VLM
144
13
0
03 Aug 2023
MIMONet: Multi-Input Multi-Output On-Device Deep Learning
Zexin Li
Xiaoxi He
Yufei Li
Shahab Nikkhoo
Wei Yang
Lothar Thiele
Cong Liu
116
6
0
22 Jul 2023
Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler
Shaohui Lin
Wenxuan Huang
Jiao Xie
Baochang Zhang
Chunjiang Ge
Zhou Yu
Jungong Han
David Doermann
105
2
0
01 Jul 2023
Magnificent Minified Models
Richard E. Harang
Hillary Sanders
39
0
0
16 Jun 2023
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Ajay Jaiswal
Shiwei Liu
Tianlong Chen
Zhangyang Wang
VLM
148
39
0
06 Jun 2023
Diffused Redundancy in Pre-trained Representations
Vedant Nanda
Till Speicher
John P. Dickerson
Soheil Feizi
Krishna P. Gummadi
Adrian Weller
SSL
96
4
0
31 May 2023
Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
Moonseok Choi
Hyungi Lee
G. Nam
Juho Lee
140
3
0
24 May 2023
Layer-adaptive Structured Pruning Guided by Latency
Siyuan Pan
Linna Zhang
Jie Zhang
Xiaoshuang Li
Liang Hou
Xiaobing Tu
95
1
0
23 May 2023
Structural Pruning for Diffusion Models
Gongfan Fang
Xinyin Ma
Xinchao Wang
196
164
0
18 May 2023
Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models
D. Honegger
Konstantin Schurholt
Damian Borth
132
5
0
26 Apr 2023
iPINNs: Incremental Learning for Physics-informed Neural Networks
Aleksandr Dekhovich
M. Sluiter
David Tax
Miguel A. Bessa
AI4CE DiffM
145
14
0
10 Apr 2023
Learning to Learn with Indispensable Connections
Sambhavi Tiwari
Manas Gogoi
Shekhar Verma
Krishna Pratap Singh
CLL
107
1
0
06 Apr 2023
NTK-SAP: Improving Neural Network Pruning by Aligning Training Dynamics
Yite Wang
Dawei Li
Ruoyu Sun
135
26
0
06 Apr 2023
SEENN: Towards Temporal Spiking Early-Exit Neural Networks
Yuhang Li
Tamar Geller
Youngeun Kim
Priyadarshini Panda
162
49
0
02 Apr 2023
Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression
Denis Kuznedelev
Soroush Tabesh
Kimia Noorbakhsh
Elias Frantar
Sara Beery
Eldar Kurtic
Dan Alistarh
MQ VLM
102
2
0
25 Mar 2023
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu
Tianlong Chen
Zhenyu Zhang
Xuxi Chen
Tianjin Huang
Ajay Jaiswal
Zhangyang Wang
121
30
0
03 Mar 2023
Balanced Training for Sparse GANs
Yite Wang
Jing Wu
N. Hovakimyan
Ruoyu Sun
133
10
0
28 Feb 2023
Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
Riade Benbaki
Wenyu Chen
X. Meng
Hussein Hazimeh
Natalia Ponomareva
Zhe Zhao
Rahul Mazumder
172
33
0
28 Feb 2023
Considering Layerwise Importance in the Lottery Ticket Hypothesis
Benjamin Vandersmissen
José Oramas
116
1
0
22 Feb 2023
Simple Hardware-Efficient Long Convolutions for Sequence Modeling
Daniel Y. Fu
Elliot L. Epstein
Eric N. D. Nguyen
A. Thomas
Michael Zhang
Tri Dao
Atri Rudra
Christopher Ré
114
61
0
13 Feb 2023
Autoselection of the Ensemble of Convolutional Neural Networks with Second-Order Cone Programming
Buse Çisil Güldoğuş
Abdullah Nazhat Abdullah
Muhammad Ammar Ali
Süreyya Özögür-Akyüz
99
0
0
12 Feb 2023
Utility-based Perturbed Gradient Descent: An Optimizer for Continual Learning
Mohamed Elsayed
A. R. Mahmood
CLL
125
7
0
07 Feb 2023
DepGraph: Towards Any Structural Pruning
Gongfan Fang
Xinyin Ma
Mingli Song
Michael Bi Mi
Xinchao Wang
GNN
271
335
0
30 Jan 2023
Low-Rank Winograd Transformation for 3D Convolutional Neural Networks
Ziran Qin
Mingbao Lin
Weiyao Lin
3DPC
121
3
0
26 Jan 2023
Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions
Junyang Cai
Khai-Nguyen Nguyen
Nishant Shrestha
Aidan Good
Ruisen Tu
Xin Yu
Shandian Zhe
Thiago Serra
MLT
146
10
0
19 Jan 2023
FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep Neural Networks
Akul Malhotra
S. Gupta
47
0
0
29 Dec 2022
Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning
Danyang Liu
Xue Liu
84
0
0
24 Dec 2022
Fairify: Fairness Verification of Neural Networks
Sumon Biswas
Hridesh Rajan
141
30
0
08 Dec 2022
The Effect of Data Dimensionality on Neural Network Prunability
Zachary Ankner
Alex Renda
Gintare Karolina Dziugaite
Jonathan Frankle
Tian Jin
112
5
0
01 Dec 2022
Partial Binarization of Neural Networks for Budget-Aware Efficient Learning
Udbhav Bamba
Neeraj Anand
Saksham Aggarwal
Dilip K Prasad
D. K. Gupta
MQ
145
0
0
12 Nov 2022