In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization

Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett
arXiv:2402.14951 · 22 February 2024

Papers citing "In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization" (4 papers)

1. Transformers Handle Endogeneity in In-Context Linear Regression
   Haodong Liang, Krishnakumar Balasubramanian, Lifeng Lai
   02 Oct 2024

2. How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?
   Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett
   12 Oct 2023

3. Meta-learning via Language Model In-context Tuning
   Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
   15 Oct 2021

4. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
   Chelsea Finn, Pieter Abbeel, Sergey Levine
   09 Mar 2017