What We Talk About When We Talk About LMs: Implicit Paradigm Shifts and the Ship of Language Models

The term Language Models (LMs), as a time-specific collection of models of interest, is constantly reinvented, with its referents updated much like the Ship of Theseus replaces its parts but remains the same ship in essence. In this paper, we investigate this problem, wherein scientific evolution takes the form of continuous, implicit retrofits of key existing terms. We seek to initiate a novel perspective on scientific progress, in addition to the more well-studied emergence of new terms. To this end, we construct a data infrastructure based on recent NLP publications. We then perform a series of text-based analyses toward a detailed, quantitative understanding of the use of Language Models as a term of art. Our work highlights how systems and theories influence each other in scientific discourse, and we call for attention to the transformation of this Ship that we are all contributing to.
@article{zhu2025_2407.01929,
  title   = {What We Talk About When We Talk About LMs: Implicit Paradigm Shifts and the Ship of Language Models},
  author  = {Shengqi Zhu and Jeffrey M. Rzeszotarski},
  journal = {arXiv preprint arXiv:2407.01929},
  year    = {2025}
}