Dynamic Channel Pruning: Feature Boosting and Suppression (arXiv:1810.05331)

12 October 2018
Xitong Gao, Yiren Zhao, L. Dudziak, Robert D. Mullins, Chengzhong Xu

Papers citing "Dynamic Channel Pruning: Feature Boosting and Suppression"

22 of 122 citing papers shown:
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
Y. Fu, Haoran You, Yang Katie Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin
24 Dec 2020
DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks
Abhishek Singh, Ayush Chopra, Vivek Sharma, Ethan Garza, Emily Zhang, Praneeth Vepakomma, Ramesh Raskar
20 Dec 2020
Bringing AI To Edge: From Deep Learning's Perspective
Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
25 Nov 2020
MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning
Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang
25 Nov 2020
Third ArchEdge Workshop: Exploring the Design Space of Efficient Deep Neural Networks
Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen
22 Nov 2020
Automated Model Compression by Jointly Applied Pruning and Quantization
Wenting Tang, Xingxing Wei, Bo-wen Li
12 Nov 2020
Effective Model Compression via Stage-wise Pruning
Mingyang Zhang, Xinyi Yu, Jingtao Rong, L. Ou
10 Nov 2020
Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
Anh-Huy Phan, Konstantin Sobolev, Konstantin Sozykin, Dmitry Ermilov, Julia Gusak, P. Tichavský, Valeriy Glukhov, Ivan V. Oseledets, A. Cichocki
12 Aug 2020
Dynamic Group Convolution for Accelerating Convolutional Neural Networks
Z. Su, Linpu Fang, Wenxiong Kang, D. Hu, M. Pietikäinen, Li Liu
08 Jul 2020
Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey
Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
08 May 2020
FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training
Sangkug Lym, M. Erez
27 Apr 2020
Computation on Sparse Neural Networks: an Inspiration for Future Hardware
Fei Sun, Minghai Qin, Tianyun Zhang, Liu Liu, Yen-kuang Chen, Yuan Xie
24 Apr 2020
Resource-Efficient Neural Networks for Embedded Systems
Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani
07 Jan 2020
S2DNAS: Transforming Static CNN Model for Dynamic Inference via Neural Architecture Search
Zhihang Yuan, Bingzhe Wu, Zheng Liang, Shiwan Zhao, Weichen Bi, Guangyu Sun
16 Nov 2019
DASNet: Dynamic Activation Sparsity for Neural Network Efficiency Improvement
Qing Yang, Jiachen Mao, Zuoguan Wang, H. Li
13 Sep 2019
Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations
Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen
10 Aug 2019
Bringing Giant Neural Networks Down to Earth with Unlabeled Data
Yehui Tang, Shan You, Chang Xu, Boxin Shi, Chao Xu
13 Jul 2019
Butterfly Transform: An Efficient FFT Based Neural Architecture Design
Keivan Alizadeh-Vahid, Anish K. Prabhu, Ali Farhadi, Mohammad Rastegari
05 Jun 2019
Interpretable Neural Network Decoupling
Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao
04 Jun 2019
Focused Quantization for Sparse CNNs
Yiren Zhao, Xitong Gao, Daniel Bates, Robert D. Mullins, Chengzhong Xu
07 Mar 2019
Channel Gating Neural Networks
Weizhe Hua, Yuan Zhou, Christopher De Sa, Zhiru Zhang, G. E. Suh
29 May 2018
Faster Neural Network Training with Approximate Tensor Operations
Menachem Adelman, Kfir Y. Levy, Ido Hakimi, M. Silberstein
21 May 2018