TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference
Shikhar Tuli, N. Jha
arXiv:2303.14882, 27 March 2023
Papers citing "TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference" (4 papers):
A Heterogeneous Chiplet Architecture for Accelerating End-to-End Transformer Models
Harsh Sharma, Pratyush Dhingra, J. Doppa, Ümit Y. Ogras, P. Pande
18 Dec 2023

Energon: Towards Efficient Acceleration of Transformers Using Dynamic Sparse Attention
Zhe Zhou, Junling Liu, Zhenyu Gu, Guangyu Sun
18 Oct 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015