Transformers are Universal In-context Learners (arXiv:2408.01367)
Takashi Furuya, Maarten V. de Hoop, Gabriel Peyré
2 August 2024
Papers citing "Transformers are Universal In-context Learners" (6 of 6 papers shown):

DDEQs: Distributional Deep Equilibrium Models through Wasserstein Gradient Flows
Jonathan Geuter, Clément Bonet, Anna Korba, David Alvarez-Melis
03 Mar 2025

Towards Understanding the Universality of Transformers for Next-Token Prediction
Michael E. Sander, Gabriel Peyré
03 Oct 2024 (tags: CML)

How do Transformers perform In-Context Autoregressive Learning?
Michael E. Sander, Raja Giryes, Taiji Suzuki, Mathieu Blondel, Gabriel Peyré
08 Feb 2024

Small Transformers Compute Universal Metric Embeddings
Anastasis Kratsios, Valentin Debarnot, Ivan Dokmanić
14 Sep 2022

Your Transformer May Not be as Powerful as You Expect
Shengjie Luo, Shanda Li, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He
26 May 2022

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
C. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas
02 Dec 2016 (tags: 3DH, 3DPC, 3DV, PINN)