Systematic generalization remains challenging for current language models, which are known both to be sensitive to semantically similar permutations of the input and to struggle with known concepts presented in novel contexts. Although benchmarks exist for assessing compositional behavior, it is unclear how to measure the difficulty of a systematic generalization problem. In this work, we show how one aspect of systematic generalization can be described by the entropy of the distribution of component parts in the training data. We formalize a framework for measuring entropy in a sequence-to-sequence task and find that the performance of popular model architectures scales with this entropy. Our work connects systematic generalization to information efficiency, and our results indicate that success at high entropy can be achieved even without built-in priors, and that success at low entropy can serve as a target for assessing progress towards robust systematic generalization.
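The paper's exact formalization is not reproduced in this abstract, but as a rough illustration, the entropy of the component distribution can be read as the Shannon entropy of the empirical frequencies of component parts across a training set. The Python sketch below is a hypothetical minimal version of that idea; it assumes components are already extracted as atomic strings (e.g., primitive commands in a SCAN-style sequence-to-sequence task), and the example data is invented.

    import math
    from collections import Counter

    def component_entropy(training_examples):
        """Shannon entropy (in bits) of the empirical distribution of
        component parts across a training set.

        `training_examples` is assumed to be an iterable of lists of
        atomic components; the paper's actual notion of "component"
        and its measurement framework are not reproduced here.
        """
        counts = Counter()
        for example in training_examples:
            counts.update(example)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    # Hypothetical low-entropy split: one primitive dominates.
    low = [["jump"], ["jump"], ["jump"], ["walk"]]
    # Hypothetical high-entropy split: uniform component distribution.
    high = [["jump"], ["walk"], ["run"], ["look"]]

    print(component_entropy(low))   # ~0.811 bits
    print(component_entropy(high))  # 2.0 bits

Under this reading, the abstract's claim is that model performance improves as the training distribution over components approaches uniformity (higher entropy), while skewed, low-entropy distributions remain the harder regime.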
@article{wold2025_2505.13089,
  title={Systematic Generalization in Language Models Scales with Information Entropy},
  author={Sondre Wold and Lucas Georges Gabriel Charpentier and Étienne Simon},
  journal={arXiv preprint arXiv:2505.13089},
  year={2025}
}