ConLID: Supervised Contrastive Learning for Low-Resource Language Identification

Main: 9 pages · Appendix: 3 pages · Bibliography: 3 pages · 4 figures · 18 tables
Abstract
Language identification (LID) is a critical step in curating multilingual LLM pretraining corpora from web crawls. While many studies on LID model training focus on collecting diverse training data to improve performance, LID performance remains poor for low-resource languages, whose training data is often limited to a single domain, such as the Bible. To address the resulting class-imbalance and bias issues, we propose a novel supervised contrastive learning (SCL) approach that learns domain-invariant representations for low-resource languages. Through extensive analysis, we show that our approach improves LID performance on out-of-domain data for low-resource languages by 3.2%, demonstrating its effectiveness in enhancing LID models.
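The abstract does not spell out the loss formulation, but the standard supervised contrastive objective (Khosla et al., 2020), treating examples of the same language as positives, is a reasonable reading of the SCL approach described here. The PyTorch sketch below is illustrative only: the function name, temperature value, and masking details are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss in the style of Khosla et al. (2020).

    embeddings: (N, D) sentence representations from an LID encoder
    labels:     (N,) language IDs; same-language pairs act as positives
    """
    z = F.normalize(embeddings, dim=1)           # cosine similarity via dot product
    sim = z @ z.t() / temperature                # (N, N) similarity logits

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other samples; the denominator excludes the anchor itself
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability over positives, for anchors that have at least one
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0
    mean_log_prob_pos = (
        log_prob.masked_fill(~pos_mask, 0.0).sum(1)[valid] / pos_counts[valid]
    )
    return -mean_log_prob_pos.mean()

# Hypothetical usage: `encoder` is a stand-in for the paper's LID model.
# z = encoder(batch_texts)                              # (N, D) embeddings
# loss = supervised_contrastive_loss(z, language_ids)
```

Intuitively, pulling together same-language examples drawn from different domains (e.g., Bible text and web text) while pushing apart other languages is what would make the learned representations domain-invariant.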
@article{foroutan2025_2506.15304,
  title={ConLID: Supervised Contrastive Learning for Low-Resource Language Identification},
  author={Negar Foroutan and Jakhongir Saydaliev and Ye Eun Kim and Antoine Bosselut},
  journal={arXiv preprint arXiv:2506.15304},
  year={2025}
}