
Can Input Attributions Explain Inductive Reasoning in In-Context Learning?

Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Main: 9 pages · Bibliography: 5 pages · Appendix: 13 pages · 15 figures · 10 tables
Abstract

Interpreting the internal processes of neural models has long been a challenge, and it remains relevant in the era of large language models (LLMs) and in-context learning (ICL). For example, ICL raises a new interpretability question: which of the few-shot examples contributed to identifying and solving the task? To address this, we design synthetic diagnostic tasks of inductive reasoning, inspired by the generalization tests typically adopted in psycholinguistics. Here, most in-context examples are ambiguous with respect to their underlying rule, and one critical example disambiguates it. The question is whether conventional input attribution (IA) methods can track such a reasoning process, i.e., identify the influential example, in ICL. Our experiments yield several practical findings; for example, a certain simple IA method works best, and the larger the model, the harder its ICL generally is to interpret with gradient-based IA methods.
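As a toy illustration of the kind of gradient-based IA method the abstract refers to, the sketch below computes gradient-times-input saliency over the tokens of a few-shot prompt, producing a per-token influence score. The model name, prompt, and aggregation choices here are placeholder assumptions for illustration, not the paper's actual experimental setup.

```python
# Minimal sketch of gradient x input attribution over an ICL prompt.
# Assumptions: gpt2 as a stand-in model, a toy few-shot prompt, and
# abs-sum aggregation over the embedding dimension.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Toy few-shot prompt: in the paper's tasks, most examples are
# ambiguous about the underlying rule and one critical example
# disambiguates it; this prompt only mimics that structure.
prompt = (
    "blicket dax -> dax blicket\n"
    "wug fep -> fep wug\n"
    "zup kiki -> kiki zup\n"
    "lug toto ->"
)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"]

# Embed the tokens and track gradients on the embeddings.
embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
outputs = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])

# Attribute the top next-token logit back to the input embeddings.
next_logits = outputs.logits[0, -1]
next_logits[next_logits.argmax()].backward()

# Gradient x input, aggregated per token over the embedding dimension.
saliency = (embeds.grad[0] * embeds[0]).sum(dim=-1).abs()
for tok, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0]), saliency):
    print(f"{tok:>12s}  {score.item():.4f}")
```

Per-token scores can then be summed within each in-context example to ask whether the critical example receives the highest attribution, which is the kind of check the paper's diagnostic tasks enable.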
