
Towards Fine-Grained Video Question Answering

Abstract

In the rapidly evolving domain of video understanding, Video Question Answering (VideoQA) remains a focal point. However, existing datasets exhibit gaps in temporal and spatial granularity, which limits the capabilities of current VideoQA methods. This paper introduces the Multi-Object Multi-Actor Question Answering (MOMA-QA) dataset, designed to address these shortcomings by emphasizing temporal localization, spatial relationship reasoning, and entity-centric queries. With ground-truth scene graphs and temporal interval annotations, MOMA-QA is well suited to developing models for fine-grained video understanding. Furthermore, we present a novel video-language model, SGVLM, which incorporates a scene graph predictor, an efficient frame retriever, and a pre-trained large language model for temporal localization and fine-grained relationship understanding. Evaluations on MOMA-QA and other public datasets demonstrate the superior performance of our model, setting new benchmarks for VideoQA.

@article{dai2025_2503.06820,
  title={Towards Fine-Grained Video Question Answering},
  author={Wei Dai and Alan Luo and Zane Durante and Debadutta Dash and Arnold Milstein and Kevin Schulman and Ehsan Adeli and Li Fei-Fei},
  journal={arXiv preprint arXiv:2503.06820},
  year={2025}
}