A Quantitative Review on Language Model Efficiency Research
arXiv:2306.01768 · 28 May 2023
Meng-Long Jiang, Hy Dang, Lingbo Tong

Papers citing "A Quantitative Review on Language Model Efficiency Research" (6 papers)

Liquid Structural State-Space Models
Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Daniela Rus
26 Sep 2022

H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
Zhenhai Zhu, Radu Soricut
25 Jul 2021

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
28 Jul 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
12 Mar 2020

On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
Sandeep Subramanian, Raymond Li, Jonathan Pilault, C. Pal
07 Sep 2019

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016