arXiv:2108.13032
Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning
30 August 2021
Ran Tian, Joshua Maynez, Ankur P. Parikh
Papers citing "Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning" (3 of 3 papers shown)
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016