This paper presents a comprehensive overview of the data preparation pipeline developed for the OpenGPT-X project, a large-scale initiative aimed at creating open and high-performance multilingual large language models (LLMs). The project's goal is to deliver models that cover all major European languages, with a particular focus on real-world applications within the European Union. We explain all data processing steps, from data selection and requirement definition through to the preparation of the final datasets for model training. We distinguish between curated data and web data, as each of these categories is handled by a distinct pipeline: curated data undergoes minimal filtering, while web data requires extensive filtering and deduplication. This distinction guided the development of specialized algorithmic solutions for both pipelines. In addition to describing the processing methodologies, we provide an in-depth analysis of the datasets, increasing transparency and alignment with European data regulations. Finally, we share key insights and challenges faced during the project, offering recommendations for future endeavors in large-scale multilingual data preparation for LLMs.
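The two-pipeline split described above can be illustrated with a minimal sketch. Note this is a hypothetical simplification, not the project's actual implementation: the function names, the length-based quality heuristic, and the exact-hash deduplication are illustrative stand-ins for the more elaborate filtering and deduplication the paper describes.

```python
import hashlib

def _normalize(text: str) -> str:
    # Case-fold and collapse whitespace so trivially differing
    # copies hash to the same value.
    return " ".join(text.lower().split())

def curated_pipeline(docs):
    # Curated sources are trusted: minimal filtering,
    # here just dropping empty documents (illustrative).
    return [d for d in docs if d.strip()]

def web_pipeline(docs, min_len=50):
    # Web data: heuristic quality filter plus exact deduplication
    # via content hashing (a stand-in for heavier dedup methods).
    seen = set()
    kept = []
    for d in docs:
        t = _normalize(d)
        if len(t) < min_len:
            continue  # too short -> likely low quality
        h = hashlib.sha256(t.encode("utf-8")).hexdigest()
        if h in seen:
            continue  # exact duplicate of an earlier document
        seen.add(h)
        kept.append(d)
    return kept
```

In practice, large-scale web pipelines replace the exact-hash step with approximate methods such as MinHash-based near-duplicate detection, since slightly edited copies of the same page would pass an exact-hash check.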
@article{brandizzi2025_2410.08800,
  title   = {Data Processing for the OpenGPT-X Model Family},
  author  = {Nicolo' Brandizzi and Hammam Abdelwahab and Anirban Bhowmick and Lennard Helmer and Benny Jörg Stein and Pavel Denisov and Qasid Saleem and Michael Fromm and Mehdi Ali and Richard Rutmann and Farzad Naderi and Mohamad Saif Agy and Alexander Schwirjow and Fabian Küch and Luzian Hahn and Malte Ostendorff and Pedro Ortiz Suarez and Georg Rehm and Dennis Wegener and Nicolas Flores-Herr and Joachim Köhler and Johannes Leveling},
  journal = {arXiv preprint arXiv:2410.08800},
  year    = {2025}
}