
Chitrarth: Bridging Vision and Language for a Billion People

Abstract

Recent multimodal foundation models are primarily trained on English or high-resource European language data, which hinders their applicability to other medium- and low-resource languages. To address this limitation, we introduce Chitrarth (Chitra: Image; Artha: Meaning), an inclusive Vision-Language Model (VLM) specifically targeting the rich linguistic diversity and visual reasoning of 10 prominent Indian languages. Our model integrates a state-of-the-art (SOTA) multilingual Large Language Model (LLM) with a vision module and is trained primarily on multilingual image-text data. We also introduce BharatBench, a comprehensive framework for evaluating VLMs across various Indian languages, contributing to more diverse and effective AI systems. Our model achieves SOTA results on benchmarks for low-resource languages while retaining its performance in English. Through our research, we aim to set new benchmarks in multilingual-multimodal capabilities, offering substantial improvements over existing models and establishing a foundation for future advancements in this arena.
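
The abstract describes coupling a multilingual LLM with a vision module. As a rough illustration of how such an integration is commonly wired up in open VLMs (LLaVA-style), the minimal sketch below projects vision-encoder patch features into the LLM's embedding space before concatenating them with text token embeddings. The class name, dimensions, and two-layer MLP projector are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Minimal sketch: project vision-encoder patch features into the
    embedding space of a multilingual LLM. Dimensions and layer choices
    are assumptions for illustration, not Chitrarth's configuration."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Two-layer MLP projector, a common choice in open VLMs.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        # returns:        (batch, num_patches, llm_dim), ready to be
        # concatenated with the LLM's text token embeddings.
        return self.proj(patch_features)


if __name__ == "__main__":
    # Toy example: one image yielding 576 patch features of width 1024.
    connector = VisionLanguageConnector()
    image_tokens = connector(torch.randn(1, 576, 1024))
    text_tokens = torch.randn(1, 32, 4096)  # stand-in for text embeddings
    multimodal_input = torch.cat([image_tokens, text_tokens], dim=1)
    print(multimodal_input.shape)  # torch.Size([1, 608, 4096])
```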

@article{khan2025_2502.15392,
  title={Chitrarth: Bridging Vision and Language for a Billion People},
  author={Shaharukh Khan and Ayush Tarun and Abhinav Ravi and Ali Faraz and Akshat Patidar and Praveen Kumar Pokala and Anagha Bhangare and Raja Kolla and Chandra Khatri and Shubham Agarwal},
  journal={arXiv preprint arXiv:2502.15392},
  year={2025}
}