Systematic Generalization in Language Models Scales with Information Entropy

19 May 2025
Sondre Wold
Lucas Georges Gabriel Charpentier
Étienne Simon
Abstract

Systematic generalization remains challenging for current language models, which are known to be sensitive to semantically similar permutations of the input and to struggle with known concepts presented in novel contexts. Although benchmarks exist for assessing compositional behavior, it is unclear how to measure the difficulty of a systematic generalization problem. In this work, we show how one aspect of systematic generalization can be described by the entropy of the distribution of component parts in the training data. We formalize a framework for measuring entropy in a sequence-to-sequence task and find that the performance of popular model architectures scales with this entropy. Our work connects systematic generalization to information efficiency. Our results indicate that success at high entropy can be achieved even without built-in priors, and that success at low entropy can serve as a target for assessing progress towards robust systematic generalization.
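As a minimal illustration of the quantity the abstract describes, the sketch below computes the Shannon entropy of the empirical distribution of component parts in a training set. This is a hypothetical reading of the paper's framework, not its actual implementation: the tokenization into "component parts" and the toy sequences are assumptions for illustration only.

```python
from collections import Counter
from math import log2

def component_entropy(sequences):
    """Shannon entropy (in bits) of the empirical distribution of
    component parts pooled across all training sequences."""
    counts = Counter(part for seq in sequences for part in seq)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Uniform over 4 distinct parts: maximal entropy, log2(4) = 2.0 bits
print(component_entropy([["jump", "twice"], ["walk", "left"]]))  # 2.0

# Skewed distribution: entropy drops below the uniform maximum
print(component_entropy([["jump", "jump"], ["jump", "left"]]))
```

Under this reading, a low-entropy training distribution (a few components dominating) poses the harder generalization target that the paper proposes as a benchmark for progress.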

View on arXiv
@article{wold2025_2505.13089,
  title={Systematic Generalization in Language Models Scales with Information Entropy},
  author={Sondre Wold and Lucas Georges Gabriel Charpentier and Étienne Simon},
  journal={arXiv preprint arXiv:2505.13089},
  year={2025}
}