A Theoretical Perspective: How to Prevent Model Collapse in Self-consuming Training Loops

High-quality data is essential for training large generative models, yet the vast reservoir of real data available online has become nearly depleted. Consequently, models increasingly generate their own data for further training, forming Self-consuming Training Loops (STLs). However, empirical results have been strikingly inconsistent: some models degrade or even collapse, while others successfully avoid these failures, leaving a significant gap in the theoretical understanding of this discrepancy. This paper introduces the intriguing notion of recursive stability and presents the first theoretical generalization analysis, revealing how both the model architecture and the proportion of real to synthetic data influence the success of STLs. We further extend this analysis to transformers in in-context learning, showing that even a constant proportion of real data ensures convergence, while also providing insights into optimal synthetic data sizing.
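To make the setup concrete, the sketch below simulates a self-consuming training loop in which a one-dimensional Gaussian "model" is refit each generation on a mixture of real and model-generated samples. This is a minimal toy illustration, not the paper's construction: the Gaussian data model, the mixing knob real_fraction, and the helper run_stl are all hypothetical choices made for exposition.

import numpy as np

rng = np.random.default_rng(0)
real_pool = rng.normal(loc=0.0, scale=1.0, size=5_000)  # fixed pool of real data

def run_stl(real_fraction: float, generations: int = 200, n: int = 500):
    """Refit a Gaussian for `generations` rounds on a real/synthetic mixture."""
    mu, sigma = real_pool.mean(), real_pool.std()  # generation-0 fit on real data
    for _ in range(generations):
        synthetic = rng.normal(mu, sigma, size=n)   # model generates its own data
        n_real = int(real_fraction * n)             # constant real-data proportion
        mixture = np.concatenate(
            [rng.choice(real_pool, size=n_real), synthetic[: n - n_real]]
        )
        mu, sigma = mixture.mean(), mixture.std()   # next generation fits the mix
    return mu, sigma

# With real_fraction = 0 the fit is unanchored and free to drift across
# generations (a toy analogue of collapse); a constant fraction of real
# data keeps it tethered to the true distribution, mirroring the abstract.
for frac in (0.0, 0.1, 0.5):
    mu, sigma = run_stl(frac)
    print(f"real_fraction={frac:.1f}: final mu={mu:+.3f}, sigma={sigma:.3f}")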
@article{fu2025_2502.18865,
  title={A Theoretical Perspective: How to Prevent Model Collapse in Self-consuming Training Loops},
  author={Shi Fu and Yingjie Wang and Yuzhu Chen and Xinmei Tian and Dacheng Tao},
  journal={arXiv preprint arXiv:2502.18865},
  year={2025}
}