ResearchTrend.AI

Data-Driven Sparse Structure Selection for Deep Neural Networks
Zehao Huang, Naiyan Wang · arXiv:1707.01213 · 5 July 2017

Papers citing "Data-Driven Sparse Structure Selection for Deep Neural Networks"

50 / 273 papers shown (topic tags in brackets)

1. SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
   Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen · 12 Mar 2020
2. Channel Pruning via Optimal Thresholding
   Yun Ye, Ganmei You, Jong-Kae Fwu, Xia Zhu, Q. Yang, Yuan Zhu · 10 Mar 2020
3. What is the State of Neural Network Pruning?
   Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 06 Mar 2020
4. Anytime Inference with Distilled Hierarchical Neural Ensembles [UQCV, BDL, FedML]
   Adria Ruiz, Jakob Verbeek · 03 Mar 2020
5. Iterative Averaging in the Quest for Best Test Error
   Diego Granziol, Xingchen Wan, Samuel Albanie, Stephen J. Roberts · 02 Mar 2020
6. Deep Learning for Biomedical Image Reconstruction: A Survey [MedIm, 3DV]
   Hanene Ben Yedder, Ben Cardoen, Ghassan Hamarneh · 26 Feb 2020
7. HRank: Filter Pruning using High-Rank Feature Map
   Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao · 24 Feb 2020
8. Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference [OOD, AAML, 3DH]
   Ting-Kuei Hu, Tianlong Chen, Haotao Wang, Zhangyang Wang · 24 Feb 2020
9. Knapsack Pruning with Inner Distillation [3DPC]
   Y. Aflalo, Asaf Noy, Ming Lin, Itamar Friedman, Lihi Zelnik-Manor · 19 Feb 2020
10. Soft Threshold Weight Reparameterization for Learnable Sparsity
    Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi · 08 Feb 2020
11. Search for Better Students to Learn Distilled Knowledge
    Jindong Gu, Volker Tresp · 30 Jan 2020
12. Channel Pruning via Automatic Structure Search
    Mingbao Lin, Rongrong Ji, Yu-xin Zhang, Baochang Zhang, Yongjian Wu, Yonghong Tian · 23 Jan 2020
13. Filter Sketch for Network Pruning [CLIP, 3DPC]
    Mingbao Lin, Liujuan Cao, Shaojie Li, QiXiang Ye, Yonghong Tian, Jianzhuang Liu, Q. Tian, Rongrong Ji · 23 Jan 2020
14. Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
    Noah Gamboa, Kais Kudrolli, Anand Dhoot, A. Pedram · 09 Jan 2020
15. Resource-Efficient Neural Networks for Embedded Systems
    Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani · 07 Jan 2020
16. MoEVC: A Mixture-of-experts Voice Conversion System with Sparse Gating Mechanism for Accelerating Online Computation [MoE]
    Yu-Tao Chang, Yuan-Hong Yang, Yu-Huai Peng, Syu-Siang Wang, T. Chi, Yu Tsao, Hsin-Min Wang · 27 Dec 2019
17. DBP: Discrimination Based Block-Level Pruning for Deep Model Acceleration
    Wenxiao Wang, Shuai Zhao, Minghao Chen, Jinming Hu, Deng Cai, Haifeng Liu · 21 Dec 2019
18. Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
    Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz · 18 Dec 2019
19. Diversifying Inference Path Selection: Moving-Mobile-Network for Landmark Recognition
    Biao Qian, Yang Wang, Zhao Zhang, Richang Hong, Meng Wang, Ling Shao · 01 Dec 2019
20. Pruning at a Glance: Global Neural Pruning for Model Compression [VLM]
    Abdullah Salama, O. Ostapenko, T. Klein, Moin Nabi · 30 Nov 2019
21. GhostNet: More Features from Cheap Operations
    Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu · 27 Nov 2019
22. Neural Network Pruning with Residual-Connections and Limited-Data
    Jian-Hao Luo, Jianxin Wu · 19 Nov 2019
23. ASCAI: Adaptive Sampling for acquiring Compact AI
    Mojan Javaheripi, Mohammad Samragh, T. Javidi, F. Koushanfar · 15 Nov 2019
24. Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation
    Junjie Liu, Dongchao Wen, Hongxing Gao, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato · 13 Nov 2019
25. NAT: Neural Architecture Transformer for Accurate and Compact Architectures
    Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Jian Chen, P. Zhao, Junzhou Huang · 31 Oct 2019
26. Building Efficient CNNs Using Depthwise Convolutional Eigen-Filters (DeCEF)
    Yinan Yu, Samuel Scheidegger, T. McKelvey · 21 Oct 2019
27. Differentiable Sparsification for Deep Neural Networks
    Yognjin Lee · 08 Oct 2019
28. SensorDrop: A Reinforcement Learning Framework for Communication Overhead Reduction on the Edge
    Pooya Khandel, Amir Hossein Rassafi, Niels Justesen, S. Risi, Julian Togelius · 03 Oct 2019
29. Training convolutional neural networks with cheap convolutions and online distillation
    Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo · 28 Sep 2019
30. Reducing Transformer Depth on Demand with Structured Dropout
    Angela Fan, Edouard Grave, Armand Joulin · 25 Sep 2019
31. DASNet: Dynamic Activation Sparsity for Neural Network Efficiency Improvement
    Qing Yang, Jiachen Mao, Zuoguan Wang, H. Li · 13 Sep 2019
32. Differentiable Mask for Pruning Convolutional and Recurrent Networks [VLM]
    R. Ramakrishnan, Eyyub Sari, V. Nia · 10 Sep 2019
33. VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks [VLM, 3DPC]
    Shuang Gao, Xin Liu, Lung-Sheng Chien, William Zhang, J. Álvarez · 10 Sep 2019
34. PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices [CVBM]
    Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang · 06 Sep 2019
35. DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures
    Huanrui Yang, W. Wen, H. Li · 27 Aug 2019
36. Adaptative Inference Cost With Convolutional Neural Mixture Models [VLM]
    Adria Ruiz, Jakob Verbeek · 19 Aug 2019
37. Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations [MQ]
    Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen · 10 Aug 2019
38. Group Pruning using a Bounded-Lp norm for Group Gating and Regularization
    Chaithanya Kumar Mummadi, Tim Genewein, Dan Zhang, Thomas Brox, Volker Fischer · 09 Aug 2019
39. Efficient Inference of CNNs via Channel Pruning [CVBM]
    Boyu Zhang, A. Davoodi, Y. Hu · 08 Aug 2019
40. Exploiting Channel Similarity for Accelerating Deep Convolutional Neural Networks
    Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng · 06 Aug 2019
41. Importance Estimation for Neural Network Pruning [3DPC]
    Pavlo Molchanov, Arun Mallya, Stephen Tyree, I. Frosio, Jan Kautz · 25 Jun 2019
42. Parameterized Structured Pruning for Deep Neural Networks
    Günther Schindler, Wolfgang Roth, Franz Pernkopf, Holger Froening · 12 Jun 2019
43. Simultaneously Learning Architectures and Features of Deep Neural Networks
    T. Wang, Lixin Fan, Huiling Wang · 11 Jun 2019
44. Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining
    Yasutoshi Ida, Yasuhiro Fujiwara · 10 Jun 2019
45. OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks [BDL, CML]
    Jiashi Li, Q. Qi, Jingyu Wang, Ce Ge, Yujian Betterest Li, Zhangzhang Yue, Haifeng Sun · 28 May 2019
46. Towards Efficient Model Compression via Learned Global Ranking
    Ting-Wu Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu · 28 Apr 2019
47. Data-Driven Neuron Allocation for Scale Aggregation Networks
    Yi Li, Zhanghui Kuang, Yimin Chen, Wayne Zhang · 20 Apr 2019
48. ThumbNet: One Thumbnail Image Contains All You Need for Recognition
    Chen Zhao, Bernard Ghanem · 10 Apr 2019
49. AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
    Jiahui Yu, Thomas Huang · 27 Mar 2019
50. MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning
    Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, K. Cheng, Jian-jun Sun · 25 Mar 2019