Large Linguistic Models: Investigating LLMs' metalinguistic abilities

The performance of large language models (LLMs) has recently improved to the point where the models can perform well on many language tasks. We show here that, for the first time, the models can also generate valid metalinguistic analyses of language data. We outline a research program in which the behavioral interpretability of LLMs on these tasks is tested via prompting. LLMs are trained primarily on text; as such, evaluating their metalinguistic abilities improves our understanding of their general capabilities and sheds new light on theoretical models in linguistics. We show that OpenAI's o1 vastly outperforms other models on tasks involving drawing syntactic trees and formulating phonological generalizations. We speculate that o1's unique advantage over other models may result from its chain-of-thought mechanism, which mimics the structure of human reasoning used in complex cognitive tasks, such as linguistic analysis.
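To make the prompting-based evaluation concrete, here is a minimal sketch of what one such behavioral test might look like, assuming access to the OpenAI Python SDK and the "o1" model; the prompt wording and the example sentence are hypothetical illustrations, not the authors' actual stimuli.

```python
# Minimal sketch of a metalinguistic prompting test (assumes the
# OpenAI Python SDK; prompt and sentence are hypothetical examples).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draw a labeled bracketing (syntactic tree) for the sentence:\n"
    "'The linguist who arrived late analyzed the data.'\n"
    "Use standard phrase-structure labels (S, NP, VP, ...)."
)

response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": prompt}],
)

# The model's analysis is then scored by hand against a reference parse.
print(response.choices[0].message.content)
```

A phonological-generalization test would follow the same pattern, with the prompt presenting a small dataset of forms and asking the model to state the rule that derives them.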
@article{beguš2025_2305.00948,
  title={Large Linguistic Models: Investigating LLMs' metalinguistic abilities},
  author={Gašper Beguš and Maksymilian Dąbkowski and Ryan Rhodes},
  journal={arXiv preprint arXiv:2305.00948},
  year={2025}
}