Llama-3-Nanda-10B-Chat: An Open Generative Large Language Model for Hindi

Developing high-quality large language models (LLMs) for moderately resourced languages presents unique challenges in data availability, model adaptation, and evaluation. We introduce Llama-3-Nanda-10B-Chat, or Nanda for short, a state-of-the-art Hindi-centric instruction-tuned generative LLM, designed to push the boundaries of open-source Hindi language models. Built upon Llama-3-8B, Nanda extends the base architecture with additional transformer blocks and applies continued pre-training, following the Llama Pro block-expansion methodology. A key challenge was the limited availability of high-quality Hindi text data; we addressed this through rigorous data curation, augmentation, and strategic bilingual training, balancing Hindi and English corpora to optimize cross-linguistic knowledge transfer. With 10 billion parameters, Nanda stands among the top-performing open-source Hindi and multilingual models of similar scale, demonstrating significant advantages over many existing models. We provide an in-depth discussion of training strategies, fine-tuning techniques, safety alignment, and evaluation metrics, demonstrating how these approaches enabled Nanda to achieve state-of-the-art results. By open-sourcing Nanda, we aim to advance research in Hindi LLMs and support a wide range of real-world applications across academia, industry, and public services.
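The abstract names Llama Pro-style block expansion as the way Llama-3-8B is grown to roughly 10B parameters. The sketch below is only an illustration of that general idea in plain PyTorch/Hugging Face: new decoder layers, copied from existing ones, are interleaved with their output projections zeroed so the expanded network initially reproduces the base model before continued pre-training. The insertion interval, layer counts, and function names here are assumptions for illustration, not the authors' exact recipe.

    # Illustrative sketch of Llama Pro-style block expansion (depth up-scaling).
    # Assumptions: Llama-3-8B has 32 decoder layers; inserting one zero-initialized
    # copy after every 4th layer yields 40 layers (~10B parameters). Not the
    # authors' exact configuration.
    import copy
    import torch
    from transformers import AutoModelForCausalLM

    def expand_blocks(model, insert_every: int = 4):
        """Interleave zero-initialized copies of existing decoder layers.

        Each inserted layer is a copy of the preceding one with its output
        projections (attention o_proj and MLP down_proj) zeroed, so its
        residual contribution is zero and the expanded model initially
        matches the base model's outputs.
        """
        layers = model.model.layers
        expanded = torch.nn.ModuleList()
        for i, layer in enumerate(layers):
            expanded.append(layer)
            if (i + 1) % insert_every == 0:
                new_layer = copy.deepcopy(layer)
                new_layer.self_attn.o_proj.weight.data.zero_()
                new_layer.mlp.down_proj.weight.data.zero_()
                expanded.append(new_layer)
        # Renumber layer indices if present (KV-cache bookkeeping in
        # recent transformers versions).
        for idx, layer in enumerate(expanded):
            if hasattr(layer.self_attn, "layer_idx"):
                layer.self_attn.layer_idx = idx
        model.model.layers = expanded
        model.config.num_hidden_layers = len(expanded)
        return model

    if __name__ == "__main__":
        base = AutoModelForCausalLM.from_pretrained(
            "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
        )
        nanda_init = expand_blocks(base, insert_every=4)  # 32 -> 40 layers
        print(nanda_init.config.num_hidden_layers)

Because the new blocks start out as identity mappings on the residual stream, continued pre-training on the bilingual Hindi-English corpus can adapt the added capacity while the base model's existing knowledge is preserved, which is the rationale Llama Pro gives for this initialization.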
@article{choudhury2025_2504.06011,
  title   = {Llama-3-Nanda-10B-Chat: An Open Generative Large Language Model for Hindi},
  author  = {Monojit Choudhury and Shivam Chauhan and Rocktim Jyoti Das and Dhruv Sahnan and Xudong Han and Haonan Li and Aaryamonvikram Singh and Alok Anil Jadhav and Utkarsh Agarwal and Mukund Choudhary and Debopriyo Banerjee and Fajri Koto and Junaid Bhat and Awantika Shukla and Samujjwal Ghosh and Samta Kamboj and Onkar Pandit and Lalit Pradhan and Rahul Pal and Sunil Sahu and Soundar Doraiswamy and Parvez Mullah and Ali El Filali and Neha Sengupta and Gokul Ramakrishnan and Rituraj Joshi and Gurpreet Gosal and Avraham Sheinin and Natalia Vassilieva and Preslav Nakov},
  journal = {arXiv preprint arXiv:2504.06011},
  year    = {2025}
}