Spoken question answering for visual queries

29 May 2025
Nimrod Shabtay
Zvi Kons
Avihu Dekel
Hagai Aronowitz
Ron Hoory
Assaf Arbelle
Main: 4 pages · Bibliography: 1 page · Appendix: 1 page · 1 figure · 4 tables
Abstract

Question answering (QA) systems are designed to answer natural language questions. Visual QA (VQA) and Spoken QA (SQA) systems extend textual QA to accept visual and spoken input, respectively. This work aims to create a system that enables user interaction through both speech and images. This is achieved by fusing the text, speech, and image modalities to tackle the task of spoken VQA (SVQA). The resulting multi-modal model accepts textual, visual, and spoken inputs and can answer spoken questions about images. Training and evaluating SVQA models requires a dataset covering all three modalities, but no such dataset currently exists. We address this problem by synthesizing spoken versions of VQA datasets using two zero-shot TTS models. Our initial findings indicate that a model trained only on synthesized speech nearly matches the performance of the upper-bound model trained on textual QAs. In addition, we show that the choice of TTS model has only a minor impact on accuracy.
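As an illustration of the dataset-synthesis step described in the abstract, the sketch below attaches a synthesized spoken question to each item of a textual VQA-style dataset. The `synthesize_speech` helper, the JSON field names (`question`, `image_id`, `answer`), and the file layout are hypothetical placeholders, not the authors' actual pipeline or data format; the placeholder just writes silent WAV files so the script runs end to end, and a real zero-shot TTS model would be plugged in at that point.

```python
"""
Sketch: pair a textual VQA dataset with synthesized spoken questions.

Assumptions (not from the paper): the dataset is a JSON list of
{"question": ..., "image_id": ..., "answer": ...} records, and
synthesize_speech stands in for whichever zero-shot TTS model is used.
"""
import json
import wave
from pathlib import Path


def synthesize_speech(text: str, out_path: Path, sample_rate: int = 16000) -> None:
    """Placeholder for a zero-shot TTS call; writes 1 second of silence."""
    with wave.open(str(out_path), "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit PCM
        wav.setframerate(sample_rate)
        wav.writeframes(b"\x00\x00" * sample_rate)


def build_spoken_vqa(vqa_json: Path, audio_dir: Path, out_json: Path) -> None:
    """Attach a synthesized audio file to every textual question."""
    audio_dir.mkdir(parents=True, exist_ok=True)
    records = json.loads(vqa_json.read_text())
    for i, rec in enumerate(records):
        audio_path = audio_dir / f"question_{i:06d}.wav"
        synthesize_speech(rec["question"], audio_path)
        rec["audio"] = str(audio_path)   # keep text, image, and speech together
    out_json.write_text(json.dumps(records, indent=2))


if __name__ == "__main__":
    build_spoken_vqa(Path("vqa_train.json"), Path("spoken_questions"), Path("svqa_train.json"))
```

Running the same loop with two different TTS back-ends would produce the two synthesized dataset variants whose effect on accuracy the abstract compares.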

@article{shabtay2025_2505.23308,
  title={Spoken question answering for visual queries},
  author={Nimrod Shabtay and Zvi Kons and Avihu Dekel and Hagai Aronowitz and Ron Hoory and Assaf Arbelle},
  journal={arXiv preprint arXiv:2505.23308},
  year={2025}
}