Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization

7 November 2018 · arXiv:1811.04770
H. T. Kung, Bradley McDanel, S. Zhang
ArXiv · PDF · HTML

Papers citing "Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization"

12 / 12 papers shown
Workload-Balanced Pruning for Sparse Spiking Neural Networks
Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda
13 Feb 2023
Improving Reliability of Spiking Neural Networks through Fault Aware Threshold Voltage Optimization
Ayesha Siddique, K. A. Hoque
12 Jan 2023
Accelerating DNN Training with Structured Data Gradient Pruning
Bradley McDanel, Helia Dinh, J. Magallanes
01 Feb 2022
Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
27 Dec 2021
SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for Graph Similarity Computation
Atefeh Sohrabizadeh, Yuze Chi, Jason Cong
GNN
10 Nov 2021
VersaGNN: a Versatile accelerator for Graph neural networks
Feng Shi, Yiqiao Jin, Song-Chun Zhu
GNN
04 May 2021
FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
15 Oct 2020
Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity
Cong Guo, B. Hsueh, Jingwen Leng, Yuxian Qiu, Yue Guan, Zehuan Wang, Xiaoying Jia, Xipeng Li, M. Guo, Yuhao Zhu
29 Aug 2020
Term Revealing: Furthering Quantization at Run Time on Quantized DNNs
H. T. Kung, Bradley McDanel, S. Zhang
MQ
13 Jul 2020
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
02 Jul 2020
Computation on Sparse Neural Networks: an Inspiration for Future Hardware
Fei Sun, Minghai Qin, Tianyun Zhang, Liu Liu, Yen-kuang Chen, Yuan Xie
24 Apr 2020
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola
OOD · MoMe
06 Mar 2017