
U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models

International Conference on Learning Representations (ICLR), 2025
Main: 11 pages · Appendix: 19 pages · Bibliography: 3 pages · 35 figures · 5 tables
Abstract

Large language models (LLMs) have been shown to exhibit emergent abilities on some downstream tasks: model performance initially stagnates with scale, then improves sharply and unpredictably once a scale threshold is crossed. In this work, we investigate this phenomenon by grouping questions by difficulty level and provide a possible explanation for emergent abilities. Specifically, we observe U-shaped scaling on hard questions, and inverted-U scaling followed by steady improvement on easy questions. The two scaling patterns initially offset each other, so overall performance appears stagnant. Performance starts to soar once the scaling pattern on easy questions reverts from inverse to standard scaling, producing the emergent ability. Based on this finding, we propose a simple yet effective pipeline, called Slice-and-Sandwich, to predict both the emergence threshold and model performance beyond the threshold. Our code is publicly available at this https URL.
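The abstract names the Slice-and-Sandwich pipeline but does not spell out its implementation. Purely as an illustration, the Python sketch below shows one way such a pipeline could be wired together, assuming per-question correctness at several model scales and a difficulty score per question are available. All names here (slice_by_difficulty, fit_trend, forecast_overall) and the choice of low-degree polynomial trends in log scale are assumptions made for this sketch, not the paper's actual method.

```python
# Hypothetical sketch of a Slice-and-Sandwich-style forecast.
# Assumes: for each model scale (e.g., training FLOPs) we have per-question
# correctness, and each question carries a difficulty score. The function
# names and the polynomial-trend modeling choice are illustrative only.
import numpy as np

def slice_by_difficulty(difficulty, threshold):
    """Split question indices into an easy slice and a hard slice."""
    easy = np.where(difficulty <= threshold)[0]
    hard = np.where(difficulty > threshold)[0]
    return easy, hard

def fit_trend(log_scale, accuracy, degree=2):
    """Fit a low-degree polynomial to accuracy vs. log model scale,
    enough to capture U-shaped or inverted-U behavior."""
    return np.polynomial.polynomial.Polynomial.fit(log_scale, accuracy, degree)

def forecast_overall(log_scale, correct, difficulty, diff_threshold, query_scales):
    """Fit per-slice trends, then recombine ('sandwich') them into an
    overall-accuracy forecast at unseen model scales."""
    easy, hard = slice_by_difficulty(difficulty, diff_threshold)
    easy_trend = fit_trend(log_scale, correct[:, easy].mean(axis=1))
    hard_trend = fit_trend(log_scale, correct[:, hard].mean(axis=1))
    # Weight each slice's forecast by its share of the question set.
    w_easy = len(easy) / (len(easy) + len(hard))
    return w_easy * easy_trend(query_scales) + (1 - w_easy) * hard_trend(query_scales)

# Toy usage: 5 model scales, 100 questions with random outcomes.
rng = np.random.default_rng(0)
log_scale = np.log10([1e19, 1e20, 1e21, 1e22, 1e23])   # training FLOPs
correct = rng.integers(0, 2, size=(5, 100)).astype(float)
difficulty = rng.random(100)
pred = forecast_overall(log_scale, correct, difficulty,
                        diff_threshold=0.5, query_scales=np.log10([1e24]))
print(pred)  # forecast accuracy at an unseen, larger scale
```

In this toy version the "sandwich" step is just a question-count-weighted recombination of the two per-slice forecasts; consult the paper for the actual difficulty grouping, curve fitting, and threshold-prediction procedure.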
