Pruning Filters for Efficient ConvNets (arXiv:1608.08710)

31 August 2016
Hao Li
Asim Kadav
Igor Durdanovic
H. Samet
H. Graf
3DPC
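For context on the paper this listing is built around: its core idea is to rank each convolutional layer's filters by L1 norm, remove the lowest-ranked filters together with their feature maps, and fine-tune to recover accuracy. Below is a minimal, hedged sketch of that idea for a single layer, assuming PyTorch; the function name and the `keep_ratio` parameter are illustrative and not taken from the authors' code release.

```python
# Sketch of L1-norm filter pruning for one Conv2d layer (illustrative only).
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.7) -> nn.Conv2d:
    """Return a new Conv2d keeping only the filters with the largest L1 norms."""
    weight = conv.weight.data                     # shape: (out_ch, in_ch, kH, kW)
    l1 = weight.abs().sum(dim=(1, 2, 3))          # one L1 norm per output filter
    n_keep = max(1, int(keep_ratio * weight.size(0)))
    keep = torch.topk(l1, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = weight[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    # A full pipeline would also slice the next layer's input channels and any
    # BatchNorm parameters, then fine-tune the network.
    return pruned

# Example: keep ~70% of the filters of a single layer.
layer = nn.Conv2d(64, 128, 3, padding=1)
smaller = prune_conv_filters(layer, keep_ratio=0.7)
print(smaller)  # Conv2d(64, 89, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
```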

Papers citing "Pruning Filters for Efficient ConvNets"

50 / 430 papers shown
3D Scene Compression through Entropy Penalized Neural Representation Functions
Thomas Bird
Johannes Ballé
Saurabh Singh
P. Chou
33
30
0
26 Apr 2021
Carrying out CNN Channel Pruning in a White Box
Yu-xin Zhang
Mingbao Lin
Chia-Wen Lin
Jie Chen
Feiyue Huang
Yongjian Wu
Yonghong Tian
R. Ji
VLM
31
58
0
24 Apr 2021
Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics
S. Yun
Alexander Wong
MQ
19
25
0
24 Apr 2021
Unsupervised Information Obfuscation for Split Inference of Neural Networks
Mohammad Samragh
H. Hosseini
Aleksei Triastcyn
K. Azarian
Joseph B. Soriaga
F. Koushanfar
20
11
0
23 Apr 2021
Improving the Accuracy of Early Exits in Multi-Exit Architectures via Curriculum Learning
Arian Bakhtiarnia
Qi Zhang
Alexandros Iosifidis
28
12
0
21 Apr 2021
Distilling Knowledge via Knowledge Review
Pengguang Chen
Shu-Lin Liu
Hengshuang Zhao
Jiaya Jia
147
420
0
19 Apr 2021
Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang
Mingbao Lin
Yan Wang
Fei Chao
Rongrong Ji
30
15
0
18 Apr 2021
CondenseNet V2: Sparse Feature Reactivation for Deep Networks
Le Yang
Haojun Jiang
Ruojin Cai
Yulin Wang
Shiji Song
Gao Huang
Qi Tian
DD
17
64
0
09 Apr 2021
Content-Aware GAN Compression
Yuchen Liu
Zhixin Shu
Yijun Li
Zhe-nan Lin
Federico Perazzi
S. Kung
GAN
24
58
0
06 Apr 2021
Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang
Hanbo Ying
Hongning Dai
Lin Li
Yuangyuang Peng
Keyi Guo
Hongfang Yu
16
38
0
20 Mar 2021
Toward Compact Deep Neural Networks via Energy-Aware Pruning
Seul-Ki Yeom
Kyung-Hwan Shim
Jee-Hyun Hwang
CVBM
22
12
0
19 Mar 2021
Scalable Vision Transformers with Hierarchical Pooling
Zizheng Pan
Bohan Zhuang
Jing Liu
Haoyu He
Jianfei Cai
ViT
25
126
0
19 Mar 2021
Recent Advances on Neural Network Pruning at Initialization
Huan Wang
Can Qin
Yue Bai
Yulun Zhang
Yun Fu
CVBM
31
64
0
11 Mar 2021
Quantization-Guided Training for Compact TinyML Models
Sedigh Ghamari
Koray Ozcan
Thu Dinh
A. Melnikov
Juan Carvajal
Jan Ernst
S. Chai
MQ
16
16
0
10 Mar 2021
Knowledge Evolution in Neural Networks
Ahmed Taha
Abhinav Shrivastava
L. Davis
42
21
0
09 Mar 2021
Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges
Yoshitomo Matsubara
Marco Levorato
Francesco Restuccia
22
199
0
08 Mar 2021
Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Lucas Liebenwein
Cenk Baykal
Brandon Carter
David K Gifford
Daniela Rus
AAML
27
71
0
04 Mar 2021
FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
Samuel Horváth
Stefanos Laskaridis
Mario Almeida
Ilias Leondiadis
Stylianos I. Venieris
Nicholas D. Lane
176
267
0
26 Feb 2021
Reduced-Order Neural Network Synthesis with Robustness Guarantees
R. Drummond
M. Turner
S. Duncan
13
9
0
18 Feb 2021
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara
Brian Chmiel
Moshe Island
Ron Banner
S. Naor
Daniel Soudry
44
110
0
16 Feb 2021
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou
Yukun Ma
Junnan Zhu
Jianbo Liu
Zhijie Zhang
Kun Yuan
Wenxiu Sun
Hongsheng Li
38
239
0
08 Feb 2021
AACP: Model Compression by Accurate and Automatic Channel Pruning
Lanbo Lin
Yujiu Yang
Zhenhua Guo
MQ
22
12
0
31 Jan 2021
Baseline Pruning-Based Approach to Trojan Detection in Neural Networks
P. Bajcsy
Michael Majurski
AAML
25
8
0
22 Jan 2021
GhostSR: Learning Ghost Features for Efficient Image Super-Resolution
Ying Nie
Kai Han
Zhenhua Liu
Chunjing Xu
Yunhe Wang
OOD
35
22
0
21 Jan 2021
RepVGG: Making VGG-style ConvNets Great Again
Xiaohan Ding
X. Zhang
Ningning Ma
Jungong Han
Guiguang Ding
Jian-jun Sun
125
1,546
0
11 Jan 2021
Robustness and Transferability of Universal Attacks on Compressed Models
Alberto G. Matachana
Kenneth T. Co
Luis Muñoz-González
David Martínez
Emil C. Lupu
AAML
11
10
0
10 Dec 2020
DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and $L_0$ Regularization
Yaniv Shulman
38
3
0
07 Dec 2020
Cross-Layer Distillation with Semantic Calibration
Defang Chen
Jian-Ping Mei
Yuan Zhang
Can Wang
Yan Feng
Chun-Yen Chen
FedML
34
286
0
06 Dec 2020
Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks
Pengtao Xu
Jian Cao
Fanhua Shang
Wenyu Sun
Pu Li
3DPC
12
24
0
29 Nov 2020
ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
Dawid Rymarczyk
Lukasz Struski
Jacek Tabor
Bartosz Zieliński
12
111
0
29 Nov 2020
Bringing AI To Edge: From Deep Learning's Perspective
Di Liu
Hao Kong
Xiangzhong Luo
Weichen Liu
Ravi Subramaniam
42
116
0
25 Nov 2020
Auto Graph Encoder-Decoder for Neural Network Pruning
Sixing Yu
Arya Mazaheri
Ali Jannesari
GNN
11
38
0
25 Nov 2020
Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier
Vincent Gripon
Mathieu Léonardon
M. Arzel
T. Hannagan
David Bertrand
23
25
0
20 Nov 2020
Neural Network Compression Via Sparse Optimization
Tianyi Chen
Bo Ji
Yixin Shi
Tianyu Ding
Biyi Fang
Sheng Yi
Xiao Tu
22
15
0
10 Nov 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia
Binghui Wang
Neil Zhenqiang Gong
AAML
19
5
0
26 Oct 2020
SCOP: Scientific Control for Reliable Neural Network Pruning
Yehui Tang
Yunhe Wang
Yixing Xu
Dacheng Tao
Chunjing Xu
Chao Xu
Chang Xu
AAML
39
166
0
21 Oct 2020
Training Binary Neural Networks through Learning with Noisy Supervision
Kai Han
Yunhe Wang
Yixing Xu
Chunjing Xu
Enhua Wu
Chang Xu
MQ
8
55
0
10 Oct 2020
Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Róbert Csordás
Sjoerd van Steenkiste
Jürgen Schmidhuber
21
87
0
05 Oct 2020
Joint Pruning & Quantization for Extremely Sparse Neural Networks
Po-Hsiang Yu
Sih-Sian Wu
Jan P. Klopp
Liang-Gee Chen
Shao-Yi Chien
MQ
12
14
0
05 Oct 2020
Self-grouping Convolutional Neural Networks
Qingbei Guo
Xiaojun Wu
J. Kittler
Zhiquan Feng
17
22
0
29 Sep 2020
Pruning Convolutional Filters using Batch Bridgeout
Najeeb Khan
Ian Stavness
16
3
0
23 Sep 2020
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Jingtong Su
Yihang Chen
Tianle Cai
Tianhao Wu
Ruiqi Gao
Liwei Wang
J. Lee
6
85
0
22 Sep 2020
PP-OCR: A Practical Ultra Lightweight OCR System
Yuning Du
Chenxia Li
Ruoyu Guo
Xiaoting Yin
Weiwei Liu
...
Yifan Bai
Zilin Yu
Yehua Yang
Qingqing Dang
Hongya Wang
27
177
0
21 Sep 2020
MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen
Marios Savvides
15
63
0
17 Sep 2020
Transform Quantization for CNN (Convolutional Neural Network) Compression
Sean I. Young
Wang Zhe
David S. Taubman
B. Girod
MQ
22
69
0
02 Sep 2020
Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel
Jianhong Chen
Pengchuan Zhang
Jinchao Xu
21
5
0
21 Aug 2020
T-Basis: a Compact Representation for Neural Networks
Anton Obukhov
M. Rakhuba
Stamatios Georgoulis
Menelaos Kanakis
Dengxin Dai
Luc Van Gool
29
27
0
13 Jul 2020
Operation-Aware Soft Channel Pruning using Differentiable Masks
Minsoo Kang
Bohyung Han
AAML
20
138
0
08 Jul 2020
Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning
Yawen Wu
Zhepeng Wang
Yiyu Shi
J. Hu
16
43
0
07 Jul 2020
Self-Supervised GAN Compression
Chong Yu
Jeff Pool
7
9
0
03 Jul 2020