Adapting Multilingual LLMs to Low-Resource Languages using Continued Pre-training and Synthetic Corpus

18 October 2024
Raviraj Joshi
Kanishk Singla
Anusha Kamath
Raunak Kalani
Rakesh Paul
Utkarsh Vaidya
Sanjay Singh Chauhan
Niranjan Wartikar
Eileen Long
    SyDa
    CLL
Abstract

Multilingual LLMs support a variety of languages; however, their performance is suboptimal for low-resource languages. In this work, we emphasize the importance of continued pre-training of multilingual LLMs and the use of translation-based synthetic pre-training corpora for improving LLMs in low-resource languages. We conduct our study in the context of the low-resource Indic language Hindi. We introduce Nemotron-Mini-Hindi 4B, a bilingual SLM supporting both Hindi and English, based on Nemotron-Mini 4B. The model is trained on a mix of real and synthetic Hindi + English tokens, with continued pre-training performed on 400B tokens. We demonstrate that both the base and instruct models achieve state-of-the-art results on Hindi benchmarks while remaining competitive on English tasks. Additionally, we observe that the continued pre-training approach enhances the model's overall factual accuracy. We perform an ablation study to highlight the impact of Hindi pre-training, showing significant improvements in Hindi chat capabilities and factual accuracy, which cannot be achieved through Hindi alignment alone.
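The abstract describes the recipe only at a high level: translate English documents into Hindi to obtain a synthetic corpus, then continue pre-training on a mix of real and synthetic Hindi + English tokens. The sketch below is a minimal, hypothetical illustration of that corpus-creation and data-mixing step, not the authors' actual pipeline; the translate_to_hindi stub, the mixing weights, and the toy corpora are all assumptions introduced for illustration.

```python
"""Minimal sketch of translation-based synthetic corpus creation and data mixing
for continued pre-training. Hypothetical illustration only: the translator stub,
mixing weights, and corpora below are assumptions, not the paper's pipeline."""

import random
from typing import Iterable, Iterator


def translate_to_hindi(text: str) -> str:
    """Placeholder for a machine-translation step (an English-to-Hindi MT system
    in a real pipeline). Returns the input unchanged here."""
    return text  # assumption: a real pipeline would call an MT model


def synthetic_hindi_corpus(english_docs: Iterable[str]) -> Iterator[str]:
    """Yield synthetic Hindi documents produced by translating English documents."""
    for doc in english_docs:
        yield translate_to_hindi(doc)


def mix_corpora(real_hindi: list[str],
                synthetic_hindi: list[str],
                english: list[str],
                weights: tuple[float, float, float] = (0.2, 0.4, 0.4),
                n_samples: int = 10,
                seed: int = 0) -> list[str]:
    """Sample documents from the three sources according to `weights`,
    approximating a token-budget mix (the weights here are assumptions)."""
    rng = random.Random(seed)
    sources = (real_hindi, synthetic_hindi, english)
    mixed = []
    for _ in range(n_samples):
        source = rng.choices(sources, weights=weights, k=1)[0]
        mixed.append(rng.choice(source))
    return mixed


if __name__ == "__main__":
    english = ["An English document.", "Another English document."]
    real_hindi = ["एक वास्तविक हिंदी दस्तावेज़।"]
    synthetic_hindi = list(synthetic_hindi_corpus(english))
    for doc in mix_corpora(real_hindi, synthetic_hindi, english):
        print(doc)
```

In the paper itself, continued pre-training then runs over roughly 400B tokens of such a mix; the ratios above are illustrative only.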

View on arXiv
@article{joshi2025_2410.14815,
  title={Adapting Multilingual LLMs to Low-Resource Languages using Continued Pre-training and Synthetic Corpus},
  author={Raviraj Joshi and Kanishk Singla and Anusha Kamath and Raunak Kalani and Rakesh Paul and Utkarsh Vaidya and Sanjay Singh Chauhan and Niranjan Wartikar and Eileen Long},
  journal={arXiv preprint arXiv:2410.14815},
  year={2025}
}