Can Input Attributions Interpret the Inductive Reasoning Process Elicited in In-Context Learning?

20 December 2024
Mengyu Ye
Tatsuki Kuribayashi
Goro Kobayashi
Jun Suzuki
Abstract

Interpreting the internal process of neural models has long been a challenge. This challenge remains relevant in the era of large language models (LLMs) and in-context learning (ICL); for example, ICL poses a new question of interpreting which of the few-shot examples contributed to identifying and solving the task. To this end, in this paper, we design synthetic diagnostic tasks of inductive reasoning, inspired by generalization tests in linguistics; here, most in-context examples are ambiguous with respect to their underlying rule, and one critical example disambiguates the demonstrated task. The question is whether conventional input attribution (IA) methods can track such a reasoning process, i.e., identify the influential example, in ICL. Our experiments yield several practical findings; for example, a certain simple IA method works best, and the larger the model, the harder it generally becomes to interpret ICL with gradient-based IA methods.
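To make the setup concrete, the sketch below illustrates one conventional gradient-based IA baseline (gradient × input) applied to an ICL prompt, scoring how much each input token contributes to the model's prediction so that scores can then be compared across in-context examples. The model name, the toy examples, and the prompt format are illustrative assumptions, not the paper's actual tasks or implementation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only: gradient-x-input attribution over an ICL prompt.
# Model, examples, and prompt format are placeholders, not the paper's setup.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical few-shot prompt: most examples are ambiguous about the
# underlying rule; one critical example would disambiguate it.
examples = ["wug -> wugs", "blick -> blicks", "fep -> fepes"]
query = "dax ->"
prompt = "\n".join(examples + [query])

inputs = tokenizer(prompt, return_tensors="pt")
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach().requires_grad_(True)
outputs = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])

# Attribute the model's top next-token prediction back to every input token.
next_token_logits = outputs.logits[0, -1]
next_token_logits[next_token_logits.argmax()].backward()
token_scores = (embeds.grad * embeds).sum(dim=-1).abs()[0]  # one score per token

# In practice, token scores would be aggregated per in-context example to ask
# which example the attribution method ranks as most influential.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok!r}\t{score:.4f}")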

View on arXiv
@article{ye2025_2412.15628,
  title={Can Input Attributions Interpret the Inductive Reasoning Process Elicited in In-Context Learning?},
  author={Mengyu Ye and Tatsuki Kuribayashi and Goro Kobayashi and Jun Suzuki},
  journal={arXiv preprint arXiv:2412.15628},
  year={2025}
}