ResearchTrend.AI
Faster CNNs with Direct Sparse Convolutions and Guided Pruning

4 August 2016
Jongsoo Park, Sheng R. Li, W. Wen, P. T. P. Tang, Hai Helen Li, Yiran Chen, Pradeep Dubey

Papers citing "Faster CNNs with Direct Sparse Convolutions and Guided Pruning"

15 / 15 papers shown
A Generic Layer Pruning Method for Signal Modulation Recognition Deep Learning Models
Yao Lu, Yutao Zhu, Yuqi Li, Dongwei Xu, Yun Lin, Qi Xuan, Xiaoniu Yang
12 Jun 2024

Neural Networks at a Fraction with Pruned Quaternions
Sahel Mohammad Iqbal, Subhankar Mishra
13 Aug 2023

STen: Productive and Efficient Sparsity in PyTorch
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Saleh Ashkboos, Torsten Hoefler
15 Apr 2023

TDC: Towards Extremely Efficient CNNs on GPUs via Hardware-Aware Tucker Decomposition
Lizhi Xiang, Miao Yin, Chengming Zhang, Aravind Sukumaran-Rajam, P. Sadayappan, Bo Yuan, Dingwen Tao
07 Nov 2022

Doge Tickets: Uncovering Domain-general Language Models by Playing Lottery Tickets
Yi Yang, Chen Zhang, Benyou Wang, Dawei Song
20 Jul 2022

Speedup deep learning models on GPU by taking advantage of efficient unstructured pruning and bit-width reduction
Marcin Pietroń, Dominik Zurek
28 Dec 2021

Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
01 Sep 2021

SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
Hanrui Wang, Zhekai Zhang, Song Han
17 Dec 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

HRank: Filter Pruning using High-Rank Feature Map
Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao
24 Feb 2020

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
23 Jul 2019

Perturbative Neural Networks
Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides
05 Jun 2018

IGCV2: Interleaved Structured Sparse Convolutional Neural Networks
Guotian Xie, Jingdong Wang, Ting Zhang, Jianhuang Lai, Richang Hong, Guo-Jun Qi
17 Apr 2018

Learning Intrinsic Sparse Structures within Long Short-Term Memory
W. Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, H. Li
15 Sep 2017

Speeding up Convolutional Neural Networks By Exploiting the Sparsity of Rectifier Units
S. Shi, Xiaowen Chu
25 Apr 2017