We're Calling an Intervention: Exploring Fundamental Hurdles in Adapting Language Models to Nonstandard Text

We present a suite of experiments designed to reveal the underlying challenges of adapting language models to nonstandard text. We do so by designing interventions that approximate core features of user-generated text and their interactions with the existing biases of language models. Applying these interventions during language model adaptation to nonstandard text variations, we gain insight into when such adaptation succeeds, as well as into the aspects of text variation and noise that are particularly difficult for language models to handle. For instance, on text with character-level variation, even a few additional training examples improve performance over the out-of-the-box model, but gains quickly plateau, suggesting that more data alone is not the solution. In contrast, on text with variation involving new words or meanings, far more data is needed, but it leads to a massive breakthrough in performance. Our findings reveal that existing models lack the necessary infrastructure to handle diverse forms of nonstandard text, guiding the development of more resilient language modeling techniques. We make the code for our interventions, which can be applied to any English text data, publicly available.
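To make the two kinds of intervention concrete, here is a minimal sketch of what a character-level intervention (mimicking typos and orthographic noise) and a lexical intervention (introducing new words or meanings) might look like when applied to English text. This is an illustrative stand-in, not the authors' released code; the function names, substitution table, and noise rate below are hypothetical.

import random

# Hypothetical character-level intervention: randomly delete or repeat
# characters to approximate typos and orthographic variation.
def char_level_intervention(text: str, rate: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if r < rate / 2:
            continue                # delete this character
        elif r < rate:
            out.append(ch + ch)     # repeat this character
        else:
            out.append(ch)          # keep unchanged
    return "".join(out)

# Hypothetical lexical intervention: map standard words to replacements,
# simulating variation that introduces new words or shifted meanings.
LEXICAL_MAP = {"good": "lit", "friend": "fam", "very": "hella"}  # toy table

def lexical_intervention(text: str) -> str:
    return " ".join(LEXICAL_MAP.get(w.lower(), w) for w in text.split())

if __name__ == "__main__":
    s = "this is a very good example from a friend"
    print(char_level_intervention(s))  # e.g. "this i a verry god exampl from a friennd"
    print(lexical_intervention(s))     # "this is a hella lit example from a fam"

In line with the paper's findings, a model can often recover from the first kind of perturbation with little extra training data (the surface forms stay close to known words), whereas the second kind requires enough data to learn genuinely new form-meaning mappings.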
@article{srivastava2025_2404.07304,
  title={We're Calling an Intervention: Exploring Fundamental Hurdles in Adapting Language Models to Nonstandard Text},
  author={Aarohi Srivastava and David Chiang},
  journal={arXiv preprint arXiv:2404.07304},
  year={2025}
}