Task Vectors in In-Context Learning: Emergence, Formation, and Benefit

17 January 2025
Liu Yang
Ziqian Lin
Kangwook Lee
Dimitris Papailiopoulos
Robert D. Nowak

Papers citing "Task Vectors in In-Context Learning: Emergence, Formation, and Benefit"

9 citing papers
Just-in-time and distributed task representations in language models
Yuxuan Li, Declan Campbell, Stephanie Chan, Andrew Kyle Lampinen
28 Aug 2025

Look the Other Way: Designing 'Positive' Molecules with Negative Data via Task Arithmetic
Rıza Özçelik, Sarah de Ruiter, F. Grisoni
23 Jul 2025

Next-Token Prediction Should be Ambiguity-Sensitive: A Meta-Learning Perspective
Léo Gagnon, Eric Elmoznino, Sarthak Mittal, Tom Marty, Tejas Kasetty, Dhanya Sridhar, Guillaume Lajoie
19 Jun 2025

Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations
Yuxin Dong, Jiachen Jiang, Zhihui Zhu, Xia Ning
10 Jun 2025

Adaptive Task Vectors for Large Language Models
Joonseong Kang, Soojeong Lee, Subeen Park, Sumin Park, Taero Kim, Jihee Kim, Ryunyi Lee, Kyungwoo Song
03 Jun 2025

Text Generation Beyond Discrete Token Sampling
Yufan Zhuang, Liyuan Liu, Chandan Singh, Jingbo Shang, Jianfeng Gao
20 May 2025

Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs
Zhipeng Yang, Junzhuo Li, Siyu Xia, Xuming Hu
20 May 2025

Representation Engineering for Large-Language Models: Survey and Research Challenges
Lukasz Bartoszcze, Sarthak Munshi, Bryan Sukidi, Jennifer Yen, Zejia Yang, David Williams-King, Linh Le, Kosi Asuzu, Carsten Maple
24 Feb 2025

Does learning the right latent variables necessarily improve in-context learning?
Sarthak Mittal, Eric Elmoznino, Léo Gagnon, Sangnie Bhardwaj, Tom Marty, Dhanya Sridhar, Guillaume Lajoie
29 May 2024