ResearchTrend.AI


Linear-Time Demonstration Selection for In-Context Learning via Gradient Estimation

27 August 2025
Ziniu Zhang
Zhenshuo Zhang
Dongyue Li
Lu Wang
Jennifer Dy
Hongyang R. Zhang
Main: 9 pages · Bibliography: 3 pages · Appendix: 7 pages · 5 figures · 12 tables
Abstract

This paper introduces an algorithm to select demonstration examples for in-context learning of a query set. Given a set of n examples, how can we quickly select k out of n to best serve as the conditioning for downstream inference? This problem has broad applications in prompt tuning and chain-of-thought reasoning. Since model weights remain fixed during in-context learning, previous work has sought to design methods based on the similarity of token embeddings. This work proposes a new approach based on gradients of the output taken in the input embedding space. Our approach estimates model outputs through a first-order approximation using the gradients. Then, we apply this estimation to multiple randomly sampled subsets. Finally, we aggregate the sampled subset outcomes to form an influence score for each demonstration, and select the k most relevant examples. This procedure only requires pre-computing model outputs and gradients once, resulting in a linear-time algorithm relative to model and training set sizes. Extensive experiments across various models and datasets validate the efficiency of our approach. We show that the gradient estimation procedure yields approximations of full inference with less than 1% error across six datasets. This allows us to scale up subset selection that would otherwise run full inference by up to 37.7x on models with up to 34 billion parameters, and outperform existing selection methods based on input embeddings by 11% on average.
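The selection procedure the abstract describes — estimate outputs with a first-order Taylor expansion in input-embedding space, score many random subsets with that cheap estimate, then aggregate per-example influence — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the embeddings, gradient, mean-pooled prompt representation, and averaging-based influence score are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: n candidate demonstrations, each represented by a
# d-dimensional input embedding (random placeholders here).
n, d, k = 50, 16, 4
emb = rng.normal(size=(n, d))

# Precomputed once, as the paper's linear-time claim requires:
# the model output on a reference prompt and its gradient with
# respect to the input embeddings (both faked with random values).
base_out = 0.0
grad = rng.normal(size=d)

def estimated_output(subset):
    """First-order estimate of the model output when conditioning on
    `subset`; mean-pooling the subset embeddings is an assumption."""
    shift = emb[subset].mean(axis=0)
    return base_out + grad @ shift

# Score many random size-k subsets with the cheap estimate, then
# aggregate: each example's influence is the average estimated
# outcome of the sampled subsets that contain it.
num_subsets = 2000
scores = np.zeros(n)
counts = np.zeros(n)
for _ in range(num_subsets):
    S = rng.choice(n, size=k, replace=False)
    y = estimated_output(S)
    scores[S] += y
    counts[S] += 1

influence = scores / np.maximum(counts, 1)
selected = np.argsort(influence)[-k:]  # k highest-influence demonstrations
```

Because every subset is scored with the precomputed gradient rather than a forward pass, no additional model inference is needed after the one-time setup, which is what makes the overall procedure linear in the model and training-set sizes.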
