
INDIC QA BENCHMARK: A Multilingual Benchmark to Evaluate Question Answering capability of LLMs for Indic Languages

Abhishek Kumar Singh
Rudra Murthy
Vishwajeet Kumar
Jaydeep Sen
Ashish Mittal
Ganesh Ramakrishnan
Abstract

Large Language Models (LLMs) perform well on unseen tasks in English, but their abilities in non-English languages remain less explored due to limited benchmarks and training data. To bridge this gap, we introduce the Indic QA Benchmark, a large dataset for context-grounded question answering in 11 major Indian languages, covering both extractive and abstractive tasks. Evaluations of multilingual LLMs, including instruction-finetuned variants, reveal weak performance in low-resource languages, owing to a strong English-language bias in their training data. We also investigate the Translate-Test paradigm, where inputs are translated into English for processing and the results are translated back into the source language for output. This approach outperformed multilingual LLMs, particularly in low-resource settings. By releasing Indic QA, we aim to promote further research into LLMs' question-answering capabilities in low-resource languages. The benchmark offers a critical resource for addressing existing limitations and fostering multilingual understanding.
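The Translate-Test pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: `translate` and `answer_in_english` are hypothetical stubs (backed by tiny lookup tables) standing in for a real machine-translation system and an English-capable LLM; the paper does not specify these interfaces.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Stand-in for a machine-translation call.

    A real system would invoke an MT model or API; here a tiny lookup
    table simulates Hindi<->English translation for illustration only.
    """
    table = {
        ("hi", "en"): {"भारत की राजधानी क्या है?": "What is the capital of India?"},
        ("en", "hi"): {"New Delhi": "नई दिल्ली"},
    }
    return table.get((src, tgt), {}).get(text, text)


def answer_in_english(question: str) -> str:
    """Stand-in for an English-capable LLM answering a question."""
    canned = {"What is the capital of India?": "New Delhi"}
    return canned.get(question, "")


def translate_test(question: str, lang: str) -> str:
    """Translate-Test paradigm: source language -> English -> answer -> source language."""
    en_question = translate(question, src=lang, tgt="en")  # step 1: translate input to English
    en_answer = answer_in_english(en_question)             # step 2: answer in English
    return translate(en_answer, src="en", tgt=lang)        # step 3: translate answer back


print(translate_test("भारत की राजधानी क्या है?", "hi"))  # -> नई दिल्ली
```

In practice each stage introduces its own errors, so overall quality depends on the MT system's coverage of the source language as well as the LLM's English performance.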

@article{singh2025_2407.13522,
  title={INDIC QA BENCHMARK: A Multilingual Benchmark to Evaluate Question Answering capability of LLMs for Indic Languages},
  author={Abhishek Kumar Singh and Vishwajeet Kumar and Rudra Murthy and Jaydeep Sen and Ashish Mittal and Ganesh Ramakrishnan},
  journal={arXiv preprint arXiv:2407.13522},
  year={2025}
}