E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs

12 December 2018
Zhe Li, Caiwen Ding, Siyue Wang, Wujie Wen, Youwei Zhuo, Chang Liu, Qinru Qiu, Wenyao Xu, X. Lin, Xuehai Qian, Yanzhi Wang
    MQ

Papers citing "E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs"

9 of 9 citing papers shown

1. Ditto: Accelerating Diffusion Model via Temporal Value Similarity
   Sungbin Kim, Hyunwuk Lee, Wonho Cho, Mincheol Park, Won Woo Ro
   20 Jan 2025

2. DPD-NeuralEngine: A 22-nm 6.6-TOPS/W/mm$^2$ Recurrent Neural Network Accelerator for Wideband Power Amplifier Digital Pre-Distortion
   Ang Li, Haolin Wu, Yizhuo Wu, Qinyu Chen, Leo C. N. de Vreede, Chang Gao
   15 Oct 2024

3. Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design
   Hongxiang Fan, Thomas C. P. Chau, Stylianos I. Venieris, Royson Lee, Alexandros Kouris, Wayne Luk, Nicholas D. Lane, Mohamed S. Abdelfattah
   20 Sep 2022

4. Building Your Own Trusted Execution Environments Using FPGA
   Md. Armanuzzaman, A. Sadeghi, Ziming Zhao
   08 Mar 2022

5. Training Recurrent Neural Networks by Sequential Least Squares and the Alternating Direction Method of Multipliers
   Alberto Bemporad
   31 Dec 2021

6. Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity
   Chang Gao, T. Delbruck, Shih-Chii Liu
   04 Aug 2021

7. MOHAQ: Multi-Objective Hardware-Aware Quantization of Recurrent Neural Networks
   Nesma M. Rezk, Tomas Nordstrom, D. Stathis, Z. Ul-Abdin, E. Aksoy, A. Hemani (MQ)
   02 Aug 2021

8. CSB-RNN: A Faster-than-Realtime RNN Acceleration Framework with Compressed Structured Blocks
   Runbin Shi, Peiyan Dong, Tong Geng, Yuhao Ding, Xiaolong Ma, Hayden Kwok-Hay So, Martin C. Herbordt, Ang Li, Yanzhi Wang (MQ)
   11 May 2020

9. Approximate LSTMs for Time-Constrained Inference: Enabling Fast Reaction in Self-Driving Cars
   Alexandros Kouris, Stylianos I. Venieris, Michail Rizakis, C. Bouganis (AI4TS)
   02 May 2019