
Learning curves theory for hierarchically compositional data with power-law distributed features

Abstract

Recent theories suggest that Neural Scaling Laws arise whenever the task can be linearly decomposed into power-law distributed units. Alternatively, scaling laws also emerge when data exhibit a hierarchically compositional structure, as is thought to occur in language and images. To unify these views, we consider classification and next-token prediction tasks based on probabilistic context-free grammars -- probabilistic models that generate data via a hierarchy of production rules. For classification, we show that having power-law distributed production rules results in a power-law learning curve with an exponent depending on the rules' distribution and a large multiplicative constant that depends on the hierarchical structure. By contrast, for next-token prediction, the distribution of production rules controls the local details of the learning curve, but not the exponent describing the large-scale behaviour.
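The data model described above can be made concrete with a small sampling routine. The sketch below is not the authors' code: it generates strings from a toy hierarchical grammar in which each symbol expands into s children and the production rules are chosen with Zipf (power-law) weights. All parameters (vocabulary size v, branching factor s, depth L, exponent alpha) are hypothetical and chosen only for illustration.

```python
# Minimal sketch (assumed, not the paper's implementation) of sampling from a
# hierarchical probabilistic context-free grammar with power-law distributed rules.
import numpy as np

rng = np.random.default_rng(0)
v, s, L, alpha = 8, 2, 4, 1.5   # symbols per level, branching factor, depth, Zipf exponent

def make_rules(v, s, alpha, rng):
    """For each symbol, draw v candidate productions (s children each)
    and assign them power-law (Zipf) probabilities."""
    weights = 1.0 / np.arange(1, v + 1) ** alpha
    weights /= weights.sum()
    return {sym: ([tuple(rng.integers(0, v, size=s)) for _ in range(v)], weights)
            for sym in range(v)}

# One independent rule set per level of the hierarchy.
grammar = [make_rules(v, s, alpha, rng) for _ in range(L)]

def generate(symbol, level):
    """Recursively expand `symbol` down to the leaves (the observed tokens)."""
    if level == L:
        return [symbol]
    productions, weights = grammar[level][symbol]
    children = productions[rng.choice(len(productions), p=weights)]
    return [tok for child in children for tok in generate(child, level + 1)]

sample = generate(symbol=0, level=0)
print(sample)   # a string of s**L leaf tokens (16 here)
```

Classification and next-token prediction tasks can then be defined on such samples, e.g. predicting the root symbol from the leaves or predicting the final leaf from the preceding ones.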

@article{cagnetta2025_2505.07067,
  title={Learning curves theory for hierarchically compositional data with power-law distributed features},
  author={Francesco Cagnetta and Hyunmo Kang and Matthieu Wyart},
  journal={arXiv preprint arXiv:2505.07067},
  year={2025}
}