ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing

4 March 2021
Zejian Liu, Gang Li, Jian Cheng
MQ

Papers citing "Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing"

17 / 17 papers shown
COBRA: Algorithm-Architecture Co-optimized Binary Transformer Accelerator for Edge Inference
Ye Qiao, Zhiheng Cheng, Yian Wang, Yifan Zhang, Yunzhe Deng, Sitao Huang
22 Apr 2025

HG-PIPE: Vision Transformer Acceleration with Hybrid-Grained Pipeline
Qingyu Guo, Jiayong Wan, Songqiang Xu, Meng Li, Yuan Wang
25 Jul 2024

Co-Designing Binarized Transformer and Hardware Accelerator for Efficient End-to-End Edge Deployment
Yuhao Ji, Chao Fang, Shaobo Ma, Haikuo Shao, Zhongfeng Wang
MQ
16 Jul 2024

P$^2$-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer
Huihong Shi, Xin Cheng, Wendong Mao, Zhongfeng Wang
MQ
30 May 2024

A Survey on Transformers in NLP with Focus on Efficiency
Wazib Ansar, Saptarsi Goswami, Amlan Chakrabarti
MedIm
15 May 2024

BETA: Binarized Energy-Efficient Transformer Accelerator at the Edge
Yuhao Ji, Chao Fang, Zhongfeng Wang
22 Jan 2024

Understanding the Potential of FPGA-Based Spatial Acceleration for Large Language Model Inference
Hongzheng Chen, Jiahao Zhang, Yixiao Du, Shaojie Xiang, Zichao Yue, Niansong Zhang, Yaohui Cai, Zhiru Zhang
23 Dec 2023

A Survey of Techniques for Optimizing Transformer Inference
Krishna Teja Chitty-Venkata, Sparsh Mittal, M. Emani, V. Vishwanath, Arun Somani
16 Jul 2023

SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers
Alberto Marchisio, David Durà, Maurizio Capra, Maurizio Martina, Guido Masera, Mohamed Bennai
08 Apr 2023

Blockwise Compression of Transformer-based Models without Retraining
Gaochen Dong, W. Chen
04 Apr 2023

Block-wise Bit-Compression of Transformer-based Models
Gaochen Dong, W. Chen
16 Mar 2023

ViTA: A Vision Transformer Inference Accelerator for Edge Applications
Shashank Nag, Gourav Datta, Souvik Kundu, N. Chandrachoodan, Peter A. Beerel
ViT
17 Feb 2023

Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
31 Aug 2022

Federated Split BERT for Heterogeneous Text Classification
Zhengyang Li, Shijing Si, Jianzong Wang, Jing Xiao
FedML
26 May 2022

VAQF: Fully Automatic Software-Hardware Co-Design Framework for Low-Bit Vision Transformer
Mengshu Sun, Haoyu Ma, Guoliang Kang, Yi Ding, Tianlong Chen, Xiaolong Ma, Zhangyang Wang, Yanzhi Wang
ViT
17 Jan 2022

Vis-TOP: Visual Transformer Overlay Processor
Wei Hu, Dian Xu, Zimeng Fan, Fang Liu, Yanxiang He
BDL, ViT
21 Oct 2021

Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models
Xin Zhou, Ruotian Ma, Tao Gui, Y. Tan, Qi Zhang, Xuanjing Huang
VLM
14 Oct 2021