Data-Driven Sparse Structure Selection for Deep Neural Networks
arXiv:1707.01213 · 5 July 2017
Zehao Huang, Naiyan Wang

Papers citing "Data-Driven Sparse Structure Selection for Deep Neural Networks"

Showing 50 of 277 citing papers.

Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio
Zhengsu Chen, J. Niu, Lingxi Xie, Xuefeng Liu, Longhui Wei, Qi Tian
Computer Vision and Pattern Recognition (CVPR), 2020 · 06 Apr 2020

DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation
Xuefei Ning, Tianchen Zhao, Wenshuo Li, Peng Lei, Yu Wang, Huazhong Yang
European Conference on Computer Vision (ECCV), 2020 · 05 Apr 2020

Review of data analysis in vision inspection of power lines with an in-depth discussion of deep learning technology
Xinyu Liu, Xiren Miao, Hao Jiang, Jia Chen
22 Mar 2020

Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
Yawei Li, Shuhang Gu, Christoph Mayer, Luc Van Gool, Radu Timofte
Computer Vision and Pattern Recognition (CVPR), 2020 · 19 Mar 2020

MINT: Deep Network Compression via Mutual Information-based Neuron Trimming
Madan Ravi Ganesh, Jason J. Corso, Salimeh Yasaei Sekeh
International Conference on Pattern Recognition (ICPR), 2020 · 18 Mar 2020

SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
Jiaxiong Qiu, Cai Chen, Shuaicheng Liu, B. Zeng
IEEE Transactions on Image Processing (TIP), 2020 · 16 Mar 2020

SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen
12 Mar 2020

Channel Pruning via Optimal Thresholding
Yun Ye, Ganmei You, Jong-Kae Fwu, Xia Zhu, Q. Yang, Yuan Zhu
International Conference on Neural Information Processing (ICONIP), 2020 · 10 Mar 2020

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
Conference on Machine Learning and Systems (MLSys), 2020 · 06 Mar 2020

Anytime Inference with Distilled Hierarchical Neural Ensembles
Adria Ruiz, Jakob Verbeek
03 Mar 2020

Iterative Averaging in the Quest for Best Test Error
Diego Granziol, Xingchen Wan, Samuel Albanie, Stephen J. Roberts
Journal of Machine Learning Research (JMLR), 2020 · 02 Mar 2020

Deep Learning for Biomedical Image Reconstruction: A Survey
Hanene Ben Yedder, Ben Cardoen, Ghassan Hamarneh
Artificial Intelligence Review (AI Review), 2020 · 26 Feb 2020

HRank: Filter Pruning using High-Rank Feature Map
Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao
Computer Vision and Pattern Recognition (CVPR), 2020 · 24 Feb 2020

Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
Ting-Kuei Hu, Tianlong Chen, Haotao Wang, Zinan Lin
International Conference on Learning Representations (ICLR), 2020 · 24 Feb 2020

Knapsack Pruning with Inner Distillation
Y. Aflalo, Asaf Noy, Ming Lin, Itamar Friedman, Lihi Zelnik-Manor
19 Feb 2020

Soft Threshold Weight Reparameterization for Learnable Sparsity
Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
International Conference on Machine Learning (ICML), 2020 · 08 Feb 2020

Search for Better Students to Learn Distilled Knowledge
Jindong Gu, Volker Tresp
European Conference on Artificial Intelligence (ECAI), 2020 · 30 Jan 2020

Channel Pruning via Automatic Structure Search
Mingbao Lin, Rongrong Ji, Yuxin Zhang, Baochang Zhang, Yongjian Wu, Yonghong Tian
International Joint Conference on Artificial Intelligence (IJCAI), 2020 · 23 Jan 2020

Filter Sketch for Network Pruning
Mingbao Lin, Liujuan Cao, Shaojie Li, QiXiang Ye, Yonghong Tian, Jianzhuang Liu, Q. Tian, Rongrong Ji
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020 · 23 Jan 2020

Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
Noah Gamboa, Kais Kudrolli, Anand Dhoot, A. Pedram
09 Jan 2020

Resource-Efficient Neural Networks for Embedded Systems
Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani
07 Jan 2020

MoEVC: A Mixture-of-experts Voice Conversion System with Sparse Gating Mechanism for Accelerating Online Computation
Yu-Tao Chang, Yuan-Hong Yang, Yu-Huai Peng, Syu-Siang Wang, T. Chi, Yu Tsao, Hsin-Min Wang
27 Dec 2019

DBP: Discrimination Based Block-Level Pruning for Deep Model Acceleration
Wenxiao Wang, Shuai Zhao, Minghao Chen, Jinming Hu, Deng Cai, Haifeng Liu
21 Dec 2019

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz
Computer Vision and Pattern Recognition (CVPR), 2019 · 18 Dec 2019

Diversifying Inference Path Selection: Moving-Mobile-Network for Landmark Recognition
Biao Qian, Yang Wang, Zhao Zhang, Richang Hong, Meng Wang, Ling Shao
IEEE Transactions on Image Processing (TIP), 2019 · 01 Dec 2019

Pruning at a Glance: Global Neural Pruning for Model Compression
Abdullah Salama, O. Ostapenko, T. Klein, Moin Nabi
30 Nov 2019

GhostNet: More Features from Cheap Operations
Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
Computer Vision and Pattern Recognition (CVPR), 2019 · 27 Nov 2019

Neural Network Pruning with Residual-Connections and Limited-Data
Jian-Hao Luo, Jianxin Wu
Computer Vision and Pattern Recognition (CVPR), 2019 · 19 Nov 2019

ASCAI: Adaptive Sampling for acquiring Compact AI
Mojan Javaheripi, Mohammad Samragh, T. Javidi, F. Koushanfar
15 Nov 2019

Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation
Junjie Liu, Dongchao Wen, Hongxing Gao, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato
13 Nov 2019

NAT: Neural Architecture Transformer for Accurate and Compact Architectures
Yong Guo, Yin Zheng, Zhuliang Yu, Qi Chen, Jian Chen, P. Zhao, Junzhou Huang
Neural Information Processing Systems (NeurIPS), 2019 · 31 Oct 2019

Building Efficient CNNs Using Depthwise Convolutional Eigen-Filters (DeCEF)
Yinan Yu, Samuel Scheidegger, T. McKelvey
21 Oct 2019

Differentiable Sparsification for Deep Neural Networks
Yongjin Lee
08 Oct 2019

SensorDrop: A Reinforcement Learning Framework for Communication Overhead Reduction on the Edge
Pooya Khandel, Amir Hossein Rassafi, Niels Justesen, S. Risi, Julian Togelius
03 Oct 2019

Training convolutional neural networks with cheap convolutions and online distillation
Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo
28 Sep 2019

Reducing Transformer Depth on Demand with Structured Dropout
Angela Fan, Edouard Grave, Armand Joulin
International Conference on Learning Representations (ICLR), 2019 · 25 Sep 2019

DASNet: Dynamic Activation Sparsity for Neural Network Efficiency Improvement
Qing Yang, Jiachen Mao, Zuoguan Wang, Xue Yang
IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 2019 · 13 Sep 2019

Differentiable Mask for Pruning Convolutional and Recurrent Networks
R. Ramakrishnan, Eyyub Sari, V. Nia
Canadian Conference on Computer and Robot Vision (CRV), 2019 · 10 Sep 2019

VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks
Shuang Gao, Xin Liu, Lung-Sheng Chien, William Zhang, J. Álvarez
10 Sep 2019

PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices
Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
AAAI Conference on Artificial Intelligence (AAAI), 2019 · 06 Sep 2019

DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures
Huanrui Yang, W. Wen, Xue Yang
International Conference on Learning Representations (ICLR), 2019 · 27 Aug 2019

Adaptative Inference Cost With Convolutional Neural Mixture Models
Adria Ruiz, Jakob Verbeek
IEEE International Conference on Computer Vision (ICCV), 2019 · 19 Aug 2019

Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations
Bohan Zhuang, Jing Liu, Zhuliang Yu, Lingqiao Liu, Ian Reid, Chunhua Shen
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 · 10 Aug 2019

Group Pruning using a Bounded-Lp norm for Group Gating and Regularization
Chaithanya Kumar Mummadi, Tim Genewein, Dan Zhang, Thomas Brox, Volker Fischer
German Conference on Pattern Recognition (DAGM), 2019 · 09 Aug 2019

Efficient Inference of CNNs via Channel Pruning
Boyu Zhang, A. Davoodi, Y. Hu
08 Aug 2019

Exploiting Channel Similarity for Accelerating Deep Convolutional Neural Networks
Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng
06 Aug 2019

Importance Estimation for Neural Network Pruning
Pavlo Molchanov, Arun Mallya, Stephen Tyree, I. Frosio, Jan Kautz
Computer Vision and Pattern Recognition (CVPR), 2019 · 25 Jun 2019

Parameterized Structured Pruning for Deep Neural Networks
Günther Schindler, Wolfgang Roth, Franz Pernkopf, Holger Froening
International Conference on Machine Learning, Optimization, and Data Science (MOD), 2019 · 12 Jun 2019

Simultaneously Learning Architectures and Features of Deep Neural Networks
T. Wang, Lixin Fan, Huiling Wang
International Conference on Artificial Neural Networks (ICANN), 2019 · 11 Jun 2019

Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining
Yasutoshi Ida, Yasuhiro Fujiwara
IEEE International Joint Conference on Neural Network (IJCNN), 2019 · 10 Jun 2019