MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining

Zhixun Chen
Ping Guo
Wenhan Han
Yifan Zhang
Binbin Liu
Haobin Lin
Fengze Liu
Yan Zhao
Bingni Zhang
Taifeng Wang
Yin Zheng
Meng Fang
Main: 3 pages
16 figures
16 tables
Appendix: 20 pages
Abstract

Data quality is a critical driver of large language model performance, yet existing model-based selection methods focus almost exclusively on English. We introduce MuRating, a scalable framework that transfers high-quality English data-quality signals into a single rater for 17 target languages. MuRating aggregates multiple English "raters" via pairwise comparisons to learn unified document-quality scores, then projects these judgments through translation to train a multilingual evaluator on monolingual, cross-lingual, and parallel text pairs. Applied to web data, MuRating selects balanced subsets of English and multilingual content to pretrain a 1.2B-parameter LLaMA model. Compared to strong baselines, including QuRater, AskLLM, and DCLM, our approach boosts average accuracy on both English benchmarks and multilingual evaluations, with especially large gains on knowledge-intensive tasks. We further analyze translation fidelity, selection biases, and underrepresentation of narrative material, outlining directions for future work.
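The abstract describes learning unified document-quality scores from pairwise comparisons between raters' judgments. One standard way to turn such comparisons into scalar scores is a Bradley-Terry model; the sketch below is illustrative only, and the paper's actual aggregation procedure may differ.

```python
import math

def bradley_terry(pairs, n_docs, lr=0.1, steps=500):
    """Fit log-strength scores s_i so that P(i beats j) = sigmoid(s_i - s_j).

    pairs: list of (winner_index, loser_index) pairwise judgments.
    Returns a list of n_docs mean-centered scores.
    """
    s = [0.0] * n_docs
    for _ in range(steps):
        grad = [0.0] * n_docs
        for w, l in pairs:
            # Probability the observed winner beats the loser under current scores.
            p = 1.0 / (1.0 + math.exp(-(s[w] - s[l])))
            # Gradient of the log-likelihood for this comparison.
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        s = [si + lr * g for si, g in zip(s, grad)]
    mean = sum(s) / n_docs
    return [si - mean for si in s]

# Toy example: doc 0 is consistently preferred over doc 1, and doc 1 over doc 2.
scores = bradley_terry([(0, 1), (0, 1), (1, 2), (0, 2)], n_docs=3)
print(scores)  # doc 0 ranks highest, doc 2 lowest
```

Mean-centering fixes the model's translation invariance (only score differences matter), giving a unique, comparable scale across documents.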

@article{chen2025_2507.01785,
  title={MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining},
  author={Zhixun Chen and Ping Guo and Wenhan Han and Yifan Zhang and Binbin Liu and Haobin Lin and Fengze Liu and Yan Zhao and Bingni Zhang and Taifeng Wang and Yin Zheng and Meng Fang},
  journal={arXiv preprint arXiv:2507.01785},
  year={2025}
}