C-LSTM: Enabling Efficient LSTM using Structured Compression Techniques on FPGAs

14 March 2018
Shuo Wang, Zhe Li, Caiwen Ding, Bo Yuan, Yanzhi Wang, Qinru Qiu, Yun Liang

Papers citing "C-LSTM: Enabling Efficient LSTM using Structured Compression Techniques on FPGAs"

20 papers shown

Parameter-Efficient Fine-Tuning with Circulant and Diagonal Vectors
Xinyu Ding, Lexuan Chen, Siyu Liao, Zhongfeng Wang
01 May 2025

Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition
Yu Gong, Miao Yin, Lingyi Huang, Chunhua Deng, Yang Sui, Bo Yuan
05 Dec 2022

Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design
Hongxiang Fan, Thomas C. P. Chau, Stylianos I. Venieris, Royson Lee, Alexandros Kouris, Wayne Luk, Nicholas D. Lane, Mohamed S. Abdelfattah
20 Sep 2022

RecLight: A Recurrent Neural Network Accelerator with Integrated Silicon Photonics
Febin P. Sunny, Mahdi Nikdast, S. Pasricha
31 Aug 2022

Vau da muntanialas: Energy-efficient multi-die scalable acceleration of RNN inference
G. Paulin, Francesco Conti, Lukas Cavigelli, Luca Benini
14 Feb 2022

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity
Chang Gao, T. Delbruck, Shih-Chii Liu
04 Aug 2021

MOHAQ: Multi-Objective Hardware-Aware Quantization of Recurrent Neural Networks
Nesma M. Rezk, Tomas Nordstrom, D. Stathis, Z. Ul-Abdin, E. Aksoy, A. Hemani
02 Aug 2021

BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices
Zhe Zhou, Bizhao Shi, Zhe Zhang, Yijin Guan, Guangyu Sun, Guojie Luo
13 Apr 2021

BRDS: An FPGA-based LSTM Accelerator with Row-Balanced Dual-Ratio Sparsification
Seyed Abolfazl Ghasemzadeh, E. Tavakoli, M. Kamal, A. Afzali-Kusha, Massoud Pedram
07 Jan 2021

FTRANS: Energy-Efficient Acceleration of Transformers using FPGA
Bingbing Li, Santosh Pandey, Haowen Fang, Yanjun Lyv, Ji Li, Jieyang Chen, Mimi Xie, Lipeng Wan, Hang Liu, Caiwen Ding
16 Jul 2020

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
02 Jul 2020

CSB-RNN: A Faster-than-Realtime RNN Acceleration Framework with Compressed Structured Blocks
Runbin Shi, Peiyan Dong, Tong Geng, Yuhao Ding, Xiaolong Ma, Hayden Kwok-Hay So, Martin C. Herbordt, Ang Li, Yanzhi Wang
11 May 2020

Small-Footprint Open-Vocabulary Keyword Spotting with Quantized LSTM Networks
Théodore Bluche, Maël Primet, Thibault Gisselbrecht
25 Feb 2020

Taurus: A Data Plane Architecture for Per-Packet ML
Tushar Swamy, Alexander Rucker, M. Shahbaz, Ishan Gaur, K. Olukotun
12 Feb 2020

REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs
Caiwen Ding, Shuo Wang, Ning Liu, Kaidi Xu, Yanzhi Wang, Yun Liang
29 Sep 2019

Serving Recurrent Neural Networks Efficiently with a Spatial Accelerator
Tian Zhao, Yaqi Zhang, K. Olukotun
26 Sep 2019

E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs
Zhe Li, Caiwen Ding, Siyue Wang, Wujie Wen, Youwei Zhuo, ..., Qinru Qiu, Wenyao Xu, X. Lin, Xuehai Qian, Yanzhi Wang
12 Dec 2018

Pre-Defined Sparse Neural Networks with Hardware Acceleration
Sourya Dey, Kuan-Wen Huang, P. Beerel, K. Chugg
04 Dec 2018

FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs
Vladimir Rybalkin, Alessandro Pappalardo, M. M. Ghaffar, Giulio Gambardella, Norbert Wehn, Michaela Blott
11 Jul 2018

Efficient Recurrent Neural Networks using Structured Matrices in FPGAs
Zhe Li, Shuo Wang, Caiwen Ding, Qinru Qiu, Yanzhi Wang, Yun Liang
20 Mar 2018