Why self-attention is Natural for Sequence-to-Sequence Problems? A Perspective from Symmetries
Chao Ma, Lexing Ying
arXiv:2210.06741 (13 October 2022)
Papers citing "Why self-attention is Natural for Sequence-to-Sequence Problems? A Perspective from Symmetries" (6 papers shown)
Your Transformer May Not be as Powerful as You Expect
Shengjie Luo, Shanda Li, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He
26 May 2022 · 50 citations

Universal Approximation Under Constraints is Possible with Transformers
Anastasis Kratsios, Behnoosh Zamanlooy, Tianlin Liu, Ivan Dokmanić
07 Oct 2021 · 26 citations

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari
25 Feb 2021 · 89 citations · Tags: OOD

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
C. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas
02 Dec 2016 · 14,047 citations · Tags: 3DH, 3DPC, 3DV, PINN

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016 · 1,363 citations

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015 · 7,915 citations