In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization
Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett
arXiv:2402.14951, 22 February 2024

Cited By
Papers citing "In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization" (4 of 4 papers shown):

1. Transformers Handle Endogeneity in In-Context Linear Regression. Haodong Liang, Krishnakumar Balasubramanian, Lifeng Lai. 02 Oct 2024.
2. How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett. 12 Oct 2023.
3. Meta-learning via Language Model In-context Tuning. Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He. 15 Oct 2021.
4. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Chelsea Finn, Pieter Abbeel, Sergey Levine. 09 Mar 2017.