Rethinking Invariance in In-context Learning

8 May 2025
Lizhe Fang
Yifei Wang
Khashayar Gatmiry
Lei Fang
Yisen Wang
Abstract

In-Context Learning (ICL) has emerged as a pivotal capability of auto-regressive large language models, yet it is hindered by a notable sensitivity to the ordering of context examples, even though those examples are mutually independent. To address this issue, recent studies have introduced several variant ICL algorithms that achieve permutation invariance. However, many of these do not match the performance of the standard auto-regressive ICL algorithm. In this work, we identify two crucial elements in the design of an invariant ICL algorithm: information non-leakage and context interdependence, which no existing method achieves simultaneously. These findings lead us to the proposed Invariant ICL (InvICL), a methodology designed to achieve invariance in ICL while ensuring both properties. Empirically, InvICL surpasses previous models, both invariant and non-invariant, on most benchmark datasets, and shows superior generalization across varying input lengths. Code is available at this https URL.
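The abstract frames order sensitivity as a failure of permutation invariance: since the context examples are mutually independent, the prediction should not change when they are reordered. The short Python sketch below is only an illustration of that property, not the paper's InvICL method; the functions order_sensitive_predict, invariant_predict, and is_permutation_invariant are hypothetical stand-ins for an ICL-style predictor.

from itertools import permutations

# Toy stand-ins for an ICL predictor. A "context example" is a
# (feature, label) pair; the "prediction" for a query is a weighted
# combination of the context labels.

def order_sensitive_predict(context, query):
    # Recency-weighted average: later examples count more, so the output
    # depends on the ordering of the (independent) context examples,
    # mimicking the order sensitivity of auto-regressive ICL.
    weights = [0.5 ** (len(context) - i) for i in range(len(context))]
    total = sum(weights)
    return sum(w * label for w, (_, label) in zip(weights, context)) / total

def invariant_predict(context, query):
    # Plain average over context labels: a symmetric function of the
    # context set, hence permutation invariant by construction.
    return sum(label for _, label in context) / len(context)

def is_permutation_invariant(predict, context, query, tol=1e-9):
    """Check that predictions agree across every ordering of the context."""
    reference = predict(list(context), query)
    return all(
        abs(predict(list(perm), query) - reference) <= tol
        for perm in permutations(context)
    )

if __name__ == "__main__":
    context = [(0.0, 1.0), (1.0, 2.0), (2.0, 4.0)]  # independent demonstrations
    query = 1.5
    print(is_permutation_invariant(order_sensitive_predict, context, query))  # False
    print(is_permutation_invariant(invariant_predict, context, query))        # True

Achieving invariance alone is easy (e.g., by pooling the examples symmetrically, as above); the paper's point is that a good invariant ICL algorithm must also preserve information non-leakage and context interdependence, which such naive constructions do not guarantee.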

@article{fang2025_2505.04994,
  title={Rethinking Invariance in In-context Learning},
  author={Lizhe Fang and Yifei Wang and Khashayar Gatmiry and Lei Fang and Yisen Wang},
  journal={arXiv preprint arXiv:2505.04994},
  year={2025}
}