Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks

23 June 2017
Sayeh Sharify, Alberto Delmas Lascorz, Kevin Siu, Patrick Judd, Andreas Moshovos
Community: MQ

Papers citing "Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks" (12 of 12 papers shown)

Accelerating Attention through Gradient-Based Learned Runtime Pruning
Zheng Li, Soroush Ghodrati, Amir Yazdanbakhsh, H. Esmaeilzadeh, Mingu Kang
07 Apr 2022

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Seock-Hwan Noh, Jahyun Koo, Seunghyun Lee, Jongse Park, Jaeha Kung
Community: AI4CE
13 Mar 2022

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency
Yonggan Fu, Yang Zhao, Qixuan Yu, Chaojian Li, Yingyan Lin
Community: AAML
11 Sep 2021

Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
Maurizio Capra, Beatrice Bussolino, Alberto Marchisio, Guido Masera, Maurizio Martina, Mohamed Bennai
Community: BDL
21 Dec 2020

SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
Hanrui Wang, Zhekai Zhang, Song Han
17 Dec 2020

FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
15 Oct 2020

Always-On 674uW @ 4GOP/s Error Resilient Binary Neural Networks with Aggressive SRAM Voltage Scaling on a 22nm IoT End-Node
Alfio Di Mauro, Francesco Conti, Pasquale Davide Schiavone, D. Rossi, Luca Benini
17 Jul 2020

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
02 Jul 2020

EDD: Efficient Differentiable DNN Architecture and Implementation Co-search for Embedded AI Solutions
Yuhong Li, Cong Hao, Xiaofan Zhang, Xinheng Liu, Yao Chen, Jinjun Xiong, Wen-mei W. Hwu, Deming Chen
06 May 2020

Progressive Weight Pruning of Deep Neural Networks using ADMM
Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, ..., M. Fardad, Sijia Liu, Xiang Chen, Xinyu Lin, Yanzhi Wang
Community: AI4CE
17 Oct 2018

Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices
Yu-hsin Chen, Tien-Ju Yang, J. Emer, Vivienne Sze
Community: MQ
10 Jul 2018

Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Joo-Young Kim, Vikas Chandra, H. Esmaeilzadeh
Community: MQ
05 Dec 2017