Linear Transformers are Versatile In-Context Learners
arXiv:2402.14180 · 21 February 2024
Max Vladymyrov, J. Oswald, Mark Sandler, Rong Ge
Papers citing "Linear Transformers are Versatile In-Context Learners" (9 papers shown)
How Private is Your Attention? Bridging Privacy with In-Context Learning
Soham Bonnerjee, Zhen Wei Yeon, Anna Asch, Sagnik Nandy, Promit Ghosal
22 Apr 2025
Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B
Aleksandra Bakalova, Yana Veitsman, Xinting Huang, Michael Hahn
31 Mar 2025
Ask, and it shall be given: On the Turing completeness of prompting
Ruizhong Qiu, Zhe Xu, W. Bao, Hanghang Tong
ReLM, LRM, AI4CE
24 Feb 2025
Training Dynamics of In-Context Learning in Linear Attention
Yedi Zhang, Aaditya K. Singh, Peter E. Latham, Andrew Saxe
MLT
28 Jan 2025
XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning
Alexander Nikulin, Ilya Zisman, Alexey Zemtsov, Viacheslav Sinii
13 Jun 2024
On Mesa-Optimization in Autoregressively Trained Transformers: Emergence and Capability
Chenyu Zheng, Wei Huang, Rongzheng Wang, Guoqiang Wu, Jun Zhu, Chongxuan Li
27 May 2024
From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples
Robert Vacareanu, Vlad-Andrei Negru, Vasile Suciu, Mihai Surdeanu
11 Apr 2024
Uncovering mesa-optimization algorithms in Transformers
J. Oswald, Eyvind Niklasson, Maximilian Schlegel, Seijin Kobayashi, Nicolas Zucchet, ..., Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, João Sacramento
11 Sep 2023
In-context Learning and Induction Heads
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova Dassarma, ..., Tom B. Brown, Jack Clark, Jared Kaplan, Sam McCandlish, C. Olah
24 Sep 2022