U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models

Abstract

Large language models (LLMs) have been shown to exhibit emergent abilities on some downstream tasks: model performance stagnates at first and then improves sharply and unpredictably with scale beyond a threshold. In this work, we investigate this phenomenon by grouping questions by difficulty level, and we provide a possible explanation for emergent abilities. Specifically, we observe U-shaped scaling for hard questions, and inverted-U scaling followed by steady improvement for easy questions. The two scaling patterns initially offset each other, causing stagnant overall performance. Overall performance starts to soar when the scaling pattern of easy questions reverts from inverse to standard scaling, leading to emergent abilities. Based on this finding, we propose a simple yet effective pipeline, called Slice-and-Sandwich, to predict both the emergence threshold and model performance beyond the threshold. Our code is publicly available at this https URL.
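
The abstract only sketches the Slice-and-Sandwich idea at a high level. The Python sketch below is a rough illustration of that idea under stated assumptions: it slices per-question accuracy curves into easy and hard groups, fits a separate scaling curve to each slice using only sub-threshold models, and recombines the extrapolations. All function names, the polynomial fit in log-scale, and the equal-weight combination are illustrative assumptions, not the authors' actual implementation (see the linked code for that).

    # Illustrative sketch of the Slice-and-Sandwich idea; names and
    # fitting choices are assumptions, not the paper's implementation.
    import numpy as np

    def slice_by_difficulty(accuracies, difficulties, cutoff=0.5):
        """Split per-question accuracy curves into easy/hard slices.

        accuracies:   (n_questions, n_scales) accuracy per question
                      across model scales.
        difficulties: (n_questions,) difficulty scores in [0, 1].
        Returns the mean accuracy curve of each slice.
        """
        easy = accuracies[difficulties < cutoff]
        hard = accuracies[difficulties >= cutoff]
        return easy.mean(axis=0), hard.mean(axis=0)

    def sandwich_forecast(scales, easy_acc, hard_acc, fit_upto, degree=2):
        """Fit each slice separately on models with scale <= fit_upto,
        then recombine ('sandwich') the extrapolated curves to forecast
        overall performance beyond the emergence threshold."""
        mask = scales <= fit_upto
        log_s = np.log10(scales)  # scaling curves are fit in log-scale
        easy_fit = np.poly1d(np.polyfit(log_s[mask], easy_acc[mask], degree))
        hard_fit = np.poly1d(np.polyfit(log_s[mask], hard_acc[mask], degree))
        # Equal weighting assumes equally sized slices (an assumption).
        return 0.5 * (easy_fit(log_s) + hard_fit(log_s))

The point of fitting the slices separately is that the easy-question (inverted-U) and hard-question (U-shaped) trends are each smoother and easier to extrapolate than the flat-then-soaring aggregate curve they produce when averaged together.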

@article{wu2025_2410.01692,
  title={U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models},
  author={Tung-Yu Wu and Pei-Yu Lo},
  journal={arXiv preprint arXiv:2410.01692},
  year={2025}
}