SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity
arXiv:1711.01263, 3 November 2017
Jingyang Zhu, Jingbo Jiang, Xizi Chen, Chi-Ying Tsui

Papers citing "SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity" (8 papers)

Equitable-FL: Federated Learning with Sparsity for Resource-Constrained Environment [FedML]
Indrajeet Kumar Sinha, Shekhar Verma, Krishna Pratap Singh
02 Sep 2023

LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling [AI4CE]
Nanda K. Unnikrishnan, Keshab K. Parhi
14 Aug 2021

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
02 Jul 2020

Computation on Sparse Neural Networks: an Inspiration for Future Hardware
Fei Sun, Minghai Qin, Tianyun Zhang, Liu Liu, Yen-kuang Chen, Yuan Xie
24 Apr 2020

Non-Blocking Simultaneous Multithreading: Embracing the Resiliency of Deep Neural Networks
MICRO, 2020
Gil Shomron, U. Weiser
17 Apr 2020

Software-defined Design Space Exploration for an Efficient DNN Accelerator Architecture
IEEE Transactions on Computers, 2019
Y. Yu, Yingmin Li, Shuai Che, N. Jha, Weifeng Zhang
18 Mar 2019

Sparsity in Deep Neural Networks - An Empirical Investigation with TensorQuant
D. Loroch, Franz-Josef Pfreundt, Norbert Wehn, J. Keuper
27 Aug 2018

Full deep neural network training on a pruned weight budget
Maximilian Golub, G. Lemieux, Mieszko Lis
11 Jun 2018