Efficient Many-Shot In-Context Learning with Dynamic Block-Sparse Attention

11 March 2025
Emily Xiao
Chin-Jou Li
Yilin Zhang
Graham Neubig
Amanda Bertsch
Abstract

Many-shot in-context learning has recently shown promise as an alternative to finetuning, with the major advantage that the same model can be served for multiple tasks. However, this shifts the computational burden from training time to inference time, making deployment of many-shot ICL challenging to justify in practice. This cost is further increased if a custom demonstration set is retrieved for each inference example. We present Dynamic Block-Sparse Attention, a training-free framework for retrieval-based many-shot in-context learning. By combining carefully designed block-sparse attention and retrieval of cached groups of demonstrations, we achieve per-example latency comparable to finetuning while maintaining, on average, >95% of the best method's accuracy across strong ICL and finetuning baselines. We hope that this will further enable the deployment of many-shot ICL at scale.
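The abstract only sketches the mechanism at a high level. As a rough illustration of the idea, the snippet below builds the kind of block-diagonal attention mask such a scheme implies: each retrieved demonstration group attends only within itself (so its KV cache can be precomputed once and reused across queries), while the query tokens attend across all retrieved groups. The function name, mask layout, and group structure here are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a block-sparse attention mask over cached
# demonstration groups. Illustrative only; not the paper's code.
import torch

def block_sparse_mask(group_lens: list[int], query_len: int) -> torch.Tensor:
    """Boolean attention mask (True = may attend).

    Each demonstration group attends only within itself, causally, so its
    KV cache is independent of the other groups and can be precomputed.
    Query tokens attend to every retrieved group and, causally, to
    earlier query tokens.
    """
    total = sum(group_lens) + query_len
    mask = torch.zeros(total, total, dtype=torch.bool)

    start = 0
    for n in group_lens:
        # Causal attention restricted to this group's diagonal block.
        mask[start:start + n, start:start + n] = torch.tril(
            torch.ones(n, n, dtype=torch.bool)
        )
        start += n

    # Query block: full attention to all demonstration tokens,
    # causal attention among query tokens.
    mask[start:, :start] = True
    mask[start:, start:] = torch.tril(
        torch.ones(query_len, query_len, dtype=torch.bool)
    )
    return mask

# Example: three cached groups of 4, 3, and 5 tokens plus a 2-token query.
print(block_sparse_mask([4, 3, 5], query_len=2).int())
```

Under this layout, attention cost for the demonstrations grows with the size of each block rather than with the full context, which is what makes caching and reusing retrieved groups cheap at inference time.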

@article{xiao2025_2503.08640,
  title={Efficient Many-Shot In-Context Learning with Dynamic Block-Sparse Attention},
  author={Emily Xiao and Chin-Jou Li and Yilin Zhang and Graham Neubig and Amanda Bertsch},
  journal={arXiv preprint arXiv:2503.08640},
  year={2025}
}