
In-Context Learning of Linear Systems: Generalization Theory and Applications to Operator Learning

Main: 15 pages · 6 figures · Bibliography: 4 pages · Appendix: 17 pages
Abstract

We study theoretical guarantees for solving linear systems in-context using a linear transformer architecture. For in-domain generalization, we provide neural scaling laws that bound the generalization error in terms of the number of tasks and the sample sizes used in training and inference. For out-of-domain generalization, we find that the behavior of trained transformers under task distribution shifts depends crucially on the distribution of the tasks seen during training. We introduce a novel notion of task diversity and show that it defines a necessary and sufficient condition for pre-trained transformers to generalize under task distribution shifts. We also explore applications of learning linear systems in-context, such as in-context operator learning for PDEs. Finally, we present numerical experiments that validate the established theory.
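To make the setting concrete, here is a minimal sketch (not from the paper) of in-context learning of a linear system: each task is a vector w, the prompt consists of input-output pairs (x_i, y_i = ⟨w, x_i⟩), and the model must predict the output for a query input. A single linear self-attention step is known to be able to emulate one gradient-descent step on the in-context least-squares loss; the step size `eta` below is a hypothetical choice for illustration, not a value taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200  # task dimension, number of in-context examples

# Sample a random linear "task": y = <w, x>
w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w

# One linear-attention step can emulate one gradient-descent step
# (from a zero initialization) on the in-context least-squares loss.
# eta = 1/n is a hypothetical step size chosen so that the implicit
# estimate (1/n) X^T y approximates E[x x^T] w = w for Gaussian inputs.
eta = 1.0 / n
w_hat = eta * (X.T @ y)        # implicit one-step estimate of the task
x_query = rng.normal(size=d)
pred = x_query @ w_hat          # in-context prediction for the query

print(pred, x_query @ w)
```

With enough in-context examples the one-step estimate concentrates around the true task vector, which is the kind of sample-size dependence the in-domain scaling laws quantify.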
