Accelerator-Aware Pruning for Convolutional Neural Networks

26 April 2018
Hyeong-Ju Kang

Papers citing "Accelerator-Aware Pruning for Convolutional Neural Networks"

25 citing papers shown
KAN-SAs: Efficient Acceleration of Kolmogorov-Arnold Networks on Systolic Arrays
Sohaib Errabii, Olivier Sentieys, Marcello Traiola
20 Nov 2025

PSE-Net: Channel Pruning for Convolutional Neural Networks with Parallel-subnets Estimator
Neural Networks (NN), 2024
Shiguang Wang, Tao Xie, Haijun Liu, Xingcheng Zhang, Jian Cheng
29 Aug 2024

Effective Interplay between Sparsity and Quantization: From Theory to Practice
Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh
31 May 2024 · MQ

Sparse maximal update parameterization: A holistic approach to sparse training dynamics
Nolan Dey, Shane Bergsma, Joel Hestness
24 May 2024

From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024
Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Ruibing Jin, Xiaoli Li
09 May 2024

Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers
Abhimanyu Bambhaniya, Amir Yazdanbakhsh, Suvinay Subramanian, Sheng-Chun Kao, Shivani Agrawal, Utku Evci, Tushar Krishna
07 Feb 2024

FPGA Resource-aware Structured Pruning for Real-Time Neural Networks
International Conference on Field-Programmable Technology (ICFPT), 2023
Benjamin Ramhorst, Vladimir Loncar, George A. Constantinides
09 Aug 2023

Resource Efficient Neural Networks Using Hessian Based Pruning
J. Chong, Manas Gupta, Lihui Chen
12 Jun 2023

Accelerator-Aware Training for Transducer-Based Speech Recognition
Spoken Language Technology Workshop (SLT), 2023
Suhaila M. Shakiah, Rupak Vignesh Swaminathan, Hieu Duy Nguyen, Raviteja Chinta, Tariq Afzal, Nathan Susanj, Athanasios Mouchtaris, Grant P. Strimel, Ariya Rastrow
12 May 2023

Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning
Conference on Algebraic Informatics (CAI), 2022
Manas Gupta, Efe Camci, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Chuan-Sheng Foo, Wu Min, Lin Jie
29 Sep 2022

Training Recipe for N:M Structured Sparsity with Decaying Pruning Mask
Sheng-Chun Kao, Amir Yazdanbakhsh, Suvinay Subramanian, Shivani Agrawal, Utku Evci, T. Krishna
15 Sep 2022

DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning
Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, R. Teodorescu
31 Jul 2022 · AAML

Binarizing by Classification: Is soft function really necessary?
Yefei He, Luoming Zhang, Weijia Wu, Hong Zhou
16 May 2022 · MQ

S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration
International Symposium on High-Performance Computer Architecture (HPCA), 2021
Zhi-Gang Liu, P. Whatmough, Yuhao Zhu, Matthew Mattina
16 Jul 2021 · MQ

HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks
Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, R. Teodorescu
09 Jun 2021 · AAML

Accelerating Sparse Deep Neural Networks
Asit K. Mishra, J. Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius
16 Apr 2021

Pruning Filter in Filter
Fanxu Meng, Hao Cheng, Ke Li, Huixiang Luo, Xiao-Wei Guo, Guangming Lu, Xing Sun
30 Sep 2020 · VLM

Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration
Zhi-Gang Liu, P. Whatmough, Matthew Mattina
04 Sep 2020

Layer-specific Optimization for Mixed Data Flow with Mixed Precision in FPGA Design for CNN-based Object Detectors
Duy-Thanh Nguyen, Hyun Kim, Hyuk-Jae Lee
03 Sep 2020 · MQ

Weight-dependent Gates for Network Pruning
Yun Li, Zechun Liu, Weiqun Wu, Haotian Yao, Xinming Zhang, Fangqiu Yi, B. Yin
04 Jul 2020

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
02 Jul 2020

EDCompress: Energy-Aware Model Compression for Dataflows
Zhehui Wang, Yaoyu Zhang, Qiufeng Wang, Rick Siow Mong Goh
08 Jun 2020

SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen
12 Mar 2020

Lightweight Convolutional Representations for On-Device Natural Language Processing
Shrey Desai, Geoffrey Goh, Arun Babu, Ahmed Aly
04 Feb 2020 · AI4TS

Open DNN Box by Power Side-Channel Attack
Yun Xiang, Zhuangzhi Chen, Zuohui Chen, Zebin Fang, Haiyang Hao, Jinyin Chen, Yi Liu, Zhefu Wu, Qi Xuan, Xiaoniu Yang
21 Jul 2019 · AAML