Task Generalization With AutoRegressive Compositional Structure: Can Learning From d Tasks Generalize to d^T Tasks?
- LRM

Large language models (LLMs) exhibit remarkable task generalization, solving tasks they were never explicitly trained on with only a few demonstrations. This raises a fundamental question: When can learning from a small set of tasks generalize to a large task family? In this paper, we investigate task generalization through the lens of autoregressive compositional structure, where each task is a composition of T operations, and each operation is drawn from a finite family of d subtasks. This yields a task class of size d^T. We first show that generalization to all d^T tasks is theoretically achievable by training on only Õ(d) tasks. Empirically, we demonstrate that Transformers achieve such exponential task generalization on sparse parity functions via in-context learning (ICL) and chain-of-thought (CoT) reasoning. We further demonstrate this generalization in arithmetic and language translation, extending beyond parity functions.
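A minimal sketch (not the authors' code) of the compositional task family the abstract describes: each task is a composition of T operations, each chosen from d subtasks, so the family has d^T members. The single-bit parity subtasks, the step-wise CoT trace, and all parameter values below are illustrative assumptions.

```python
from itertools import product
import random

d, T, n_bits = 5, 3, 8  # d subtasks per step, T steps, input length (assumed values)

# One subtask per (step, choice): read one input bit position (hypothetical design).
subtask_bits = [[random.randrange(n_bits) for _ in range(d)] for _ in range(T)]

def run_task(task, x):
    """Evaluate a task (a tuple of T subtask indices) on input bits x,
    emitting the running parity after each step as a CoT-style trace."""
    state, trace = 0, []
    for step, choice in enumerate(task):
        state ^= x[subtask_bits[step][choice]]   # apply the chosen subtask
        trace.append(state)                      # intermediate reasoning token
    return state, trace

all_tasks = list(product(range(d), repeat=T))    # the full family of size d**T
print(f"subtasks per step: {d}, steps: {T}, total tasks: {len(all_tasks)} = d**T")

x = [random.randint(0, 1) for _ in range(n_bits)]
y, trace = run_task(all_tasks[0], x)
print("example input:", x, "CoT trace:", trace, "label:", y)
```

In this toy setup, only d*T distinct subtasks exist, yet they compose into d^T tasks, which is the gap the paper's sample-efficiency result exploits.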
View on arXiv