On the generalization of language models from in-context learning and finetuning: a controlled study

Abstract

Large language models exhibit exciting capabilities, yet can show surprisingly narrow generalization from finetuning. For example, they can fail to generalize to simple reversals of relations they are trained on, or fail to make simple logical deductions based on trained information. These failures to generalize from finetuning can hinder the practical application of these models. On the other hand, language models' in-context learning shows different inductive biases and can generalize better in some cases. Here, we explore these differences in generalization between in-context and finetuning-based learning. To do so, we constructed several novel datasets to evaluate and improve models' abilities to generalize from finetuning data. The datasets are designed to create clean tests of generalization by isolating the knowledge in the dataset from that in pretraining. We expose pretrained large models to controlled subsets of the information in these datasets, either in context or through finetuning, and evaluate their performance on test sets that require various types of generalization. We find overall that in data-matched settings, in-context learning can generalize more flexibly than finetuning (though we also find some qualifications of prior findings, such as cases where finetuning can generalize to reversals embedded in a larger structure of knowledge). We build on these findings to propose a method to enable improved generalization from finetuning: adding in-context inferences to the finetuning data. We show that this method improves generalization across various splits of our datasets and on other benchmarks. Our results have implications for understanding the inductive biases of different modes of learning in language models, and for practically improving their performance.
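As a rough illustration of the proposed augmentation, the Python sketch below prompts a pretrained model in context to spell out inferences (e.g., reversals and simple deductions) implied by each finetuning example, then appends those inferences to the training set. All names here (augment_with_in_context_inferences, generate) are hypothetical placeholders for illustration, not the authors' actual pipeline.

from typing import Callable, List

def augment_with_in_context_inferences(
    finetuning_examples: List[str],
    generate: Callable[[str], str],
) -> List[str]:
    """Append model-generated in-context inferences to a finetuning set.

    `generate` is assumed to be any callable that samples a completion
    from a pretrained language model given a prompt string.
    """
    augmented = list(finetuning_examples)
    for example in finetuning_examples:
        # Ask the model, in context, for facts implied by this example,
        # including reversed relations and simple logical deductions.
        prompt = (
            "List the facts that follow logically from the statement "
            "below, including reversed relations.\n"
            f"Statement: {example}\n"
            "Inferences:"
        )
        inferences = generate(prompt)
        augmented.extend(
            line.strip() for line in inferences.splitlines() if line.strip()
        )
    return augmented

The augmented list would then be used as the finetuning corpus in place of the original examples, so that the inferences the model can draw in context become explicit training targets.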

@article{lampinen2025_2505.00661,
  title={On the generalization of language models from in-context learning and finetuning: a controlled study},
  author={Andrew K. Lampinen and Arslan Chaudhry and Stephanie C.Y. Chan and Cody Wild and Diane Wan and Alex Ku and Jörg Bornschein and Razvan Pascanu and Murray Shanahan and James L. McClelland},
  journal={arXiv preprint arXiv:2505.00661},
  year={2025}
}