SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs

25 May 2025
Firoj Alam
Md. Arid Hasan
Shammur A. Chowdhury
Main: 3 pages · Bibliography: 2 pages · 5 figures · 6 tables
Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across various disciplines and tasks. However, benchmarking their capabilities with multilingual spoken queries remains largely unexplored. In this study, we introduce SpokenNativQA, the first multilingual and culturally aligned spoken question-answering (SQA) dataset designed to evaluate LLMs in real-world conversational settings. The dataset comprises approximately 33,000 naturally spoken questions and answers in multiple languages, including low-resource and dialect-rich languages, providing a robust benchmark for assessing LLM performance in speech-based interactions. SpokenNativQA addresses the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. We benchmark different ASR systems and LLMs for SQA and present our findings. We released the data at (this https URL) and the experimental scripts at (this https URL) for the research community.

@article{alam2025_2505.19163,
  title={SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs},
  author={Firoj Alam and Md Arid Hasan and Shammur Absar Chowdhury},
  journal={arXiv preprint arXiv:2505.19163},
  year={2025}
}