In-Context Learning (and Unlearning) of Length Biases

10 February 2025
Stephanie Schoch
Yangfeng Ji
Abstract

Large language models have demonstrated strong in-context learning capabilities, where exemplar input-output pairs are appended to the prompt as demonstrations. However, prior work has shown that models can also learn lexical and label biases in-context, negatively impacting both model performance and robustness. The impact of other statistical data biases remains under-explored, which this work aims to address. We specifically investigate the impact of length biases on in-context learning. We demonstrate that models do learn length biases from the context window and use them in their predictions, and we further empirically analyze the factors that modulate the level of bias the model exhibits. In addition, we show that learning length information in-context can counter the length bias already encoded in a model (e.g., via fine-tuning). This reveals the power of in-context learning for debiasing model prediction behaviors without the need for costly parameter updates.
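The abstract describes appending exemplar input-output pairs to the prompt and studying demonstrations whose labels are spuriously correlated with input length. A minimal sketch of how such a length-biased demonstration set and prompt could be constructed (the task, labels, and example texts below are hypothetical illustrations, not the paper's actual data or method):

```python
def build_prompt(demonstrations, query):
    """Append exemplar input-output pairs to the prompt, then the query.

    Each demonstration is a (text, label) pair; the model is expected to
    complete the final 'Sentiment:' line for the query.
    """
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

# Length-biased demonstrations: label is spuriously correlated with length
# (long inputs -> "positive", short inputs -> "negative").
biased_demos = [
    ("The plot was rich, the acting superb, and the score unforgettable.", "positive"),
    ("A long, winding, but ultimately rewarding story with excellent pacing.", "positive"),
    ("Dull.", "negative"),
    ("Bad film.", "negative"),
]

prompt = build_prompt(biased_demos, "An okay movie.")
print(prompt)
```

A model that picks up the length cue from these demonstrations would tend to predict "negative" for the short query regardless of its actual sentiment, which is the kind of in-context length bias the paper measures.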

@article{schoch2025_2502.06653,
  title={In-Context Learning (and Unlearning) of Length Biases},
  author={Stephanie Schoch and Yangfeng Ji},
  journal={arXiv preprint arXiv:2502.06653},
  year={2025}
}