ResearchTrend.AI
arXiv:1804.06508
UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition

18 April 2018
Kartik Hegde, Jiyong Yu, R. Agrawal, Mengjia Yan, Michael Pellauer, Christopher W. Fletcher

Papers citing "UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition"

16 / 16 papers shown
1. PLUM: Improving Inference Efficiency By Leveraging Repetition-Sparsity Trade-Off
   Sachit Kuhar, Yash Jain, Alexey Tumanov · MQ · 0 citations · 04 Dec 2023

2. Signed Binary Weight Networks
   Sachit Kuhar, Alexey Tumanov, Judy Hoffman · MQ · 1 citation · 25 Nov 2022

3. Survey: Exploiting Data Redundancy for Optimization of Deep Learning
   Jou-An Chen, Wei Niu, Bin Ren, Yanzhi Wang, Xipeng Shen · 24 citations · 29 Aug 2022

4. E^2TAD: An Energy-Efficient Tracking-based Action Detector
   Xin Hu, Zhenyu Wu, Haoyuan Miao, Siqi Fan, Taiyu Long, ..., Pengcheng Pi, Yi Wu, Zhou Ren, Zhangyang Wang, G. Hua · 2 citations · 09 Apr 2022

5. Deep Neural Networks Based Weight Approximation and Computation Reuse for 2-D Image Classification
   M. Tolba, H. Tesfai, H. Saleh, B. Mohammad, Mahmoud Al-Qutayri · 4 citations · 28 Apr 2021

6. Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search
   Kartik Hegde, Po-An Tsai, Sitao Huang, Vikas Chandra, A. Parashar, Christopher W. Fletcher · 90 citations · 02 Mar 2021

7. FPRaker: A Processing Element For Accelerating Neural Network Training
   Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos · 15 citations · 15 Oct 2020

8. Computing Graph Neural Networks: A Survey from Algorithms to Accelerators
   S. Abadal, Akshay Jain, Robert Guirado, Jorge López-Alonso, Eduard Alarcón · GNN · 225 citations · 30 Sep 2020

9. Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
   Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li · 81 citations · 02 Jul 2020

10. Lupulus: A Flexible Hardware Accelerator for Neural Networks
    Andreas Toftegaard Kristensen, R. Giterman, Alexios Balatsoukas-Stimming, A. Burg · 0 citations · 03 May 2020

11. A Pre-defined Sparse Kernel Based Convolution for Deep CNNs
    Souvik Kundu, Saurav Prakash, H. Akrami, P. Beerel, K. Chugg · 12 citations · 02 Oct 2019

12. VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing
    Qian Zhang, Jianjun Li, Meng Yao, Liangchen Song, Helong Zhou, Zhichao Li, Wenming Meng, Xuezhi Zhang, Guoli Wang · 22 citations · 12 Jul 2019

13. Towards Fast and Energy-Efficient Binarized Neural Network Inference on FPGA
    Cheng Fu, Shilin Zhu, Hao Su, Ching-En Lee, Jishen Zhao · MQ · 31 citations · 04 Oct 2018

14. SECS: Efficient Deep Stream Processing via Class Skew Dichotomy
    Boyuan Feng, Kun Wan, Shu Yang, Yufei Ding · 4 citations · 07 Sep 2018

15. RAPIDNN: In-Memory Deep Neural Network Acceleration Framework
    Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, F. Koushanfar, Tajana Simunic · 51 citations · 15 Jun 2018

16. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
    Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · MQ · 1,047 citations · 10 Feb 2017