Position: Enough of Scaling LLMs! Let's Focus on Downscaling

We challenge the dominant focus on neural scaling laws and advocate for a paradigm shift toward downscaling in the development of large language models (LLMs). While scaling laws have provided critical insights into the performance gains achieved by increasing model and dataset sizes, we emphasize the significant limitations of this approach, particularly its computational inefficiency, environmental impact, and deployment constraints. To address these challenges, we propose a holistic framework for downscaling LLMs that seeks to maintain performance while drastically reducing resource demands. This paper outlines practical strategies for transitioning away from traditional scaling paradigms, advocating for a more sustainable, efficient, and accessible approach to LLM development.
@article{sengupta2025_2505.00985,
  title   = {Position: Enough of Scaling LLMs! Let's Focus on Downscaling},
  author  = {Ayan Sengupta and Yash Goel and Tanmoy Chakraborty},
  journal = {arXiv preprint arXiv:2505.00985},
  year    = {2025}
}