
VideoMultiAgents: A Multi-Agent Framework for Video Question Answering

Abstract

Video Question Answering (VQA) inherently relies on multimodal reasoning, integrating visual, temporal, and linguistic cues to achieve a deeper understanding of video content. However, many existing methods rely on feeding frame-level captions into a single model, making it difficult to adequately capture temporal and interactive contexts. To address this limitation, we introduce VideoMultiAgents, a framework that integrates specialized agents for vision, scene graph analysis, and text processing. It enhances video understanding by leveraging complementary multimodal reasoning from independently operating agents. Our approach is further supplemented with question-guided caption generation, which produces captions that highlight objects, actions, and temporal transitions directly relevant to a given query, thereby improving answer accuracy. Experimental results demonstrate that our method achieves state-of-the-art performance on Intent-QA (79.0%, +6.2% over previous SOTA), the EgoSchema subset (75.4%, +3.4%), and NExT-QA (79.6%, +0.4%). The source code is available at this https URL.
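
To make the agent decomposition concrete, the sketch below shows one way such a framework could be organized. All class and function names here (VisionAgent, SceneGraphAgent, TextAgent, answer_question) are hypothetical illustrations, not the authors' API; the paper's actual implementation, including how agent outputs are aggregated, is in the linked repository. The majority-vote aggregator is a stand-in for whatever reconciliation mechanism the framework uses.

```python
# Hypothetical sketch of the multi-agent pattern described in the abstract.
# None of these names come from the paper; this only illustrates the idea of
# independent per-modality agents whose outputs are aggregated afterwards.

from dataclasses import dataclass


@dataclass
class AgentReport:
    agent: str
    answer: str
    rationale: str


class Agent:
    """Base class: each agent reasons over one modality independently."""
    name = "base"

    def analyze(self, question: str, video_context: dict) -> AgentReport:
        raise NotImplementedError


class VisionAgent(Agent):
    name = "vision"

    def analyze(self, question, video_context):
        # Would call a vision-language model on sampled frames.
        frames = video_context["frames"]
        return AgentReport(self.name, f"answer from {len(frames)} frames", "visual cues")


class SceneGraphAgent(Agent):
    name = "scene_graph"

    def analyze(self, question, video_context):
        # Would reason over object/relation graphs extracted from the video.
        graph = video_context["scene_graph"]
        return AgentReport(self.name, f"answer from {len(graph)} relations", "interactions")


class TextAgent(Agent):
    name = "text"

    def analyze(self, question, video_context):
        # Would reason over question-guided captions, i.e. captions generated
        # to emphasize objects, actions, and transitions relevant to the query.
        captions = video_context["captions"]
        return AgentReport(self.name, f"answer from {len(captions)} captions", "temporal text")


def answer_question(question: str, video_context: dict) -> str:
    """Run each agent independently, then aggregate their reports.

    A real system would likely reconcile rationales with an LLM-based
    organizer; a naive majority vote stands in for that step here.
    """
    reports = [agent.analyze(question, video_context)
               for agent in (VisionAgent(), SceneGraphAgent(), TextAgent())]
    answers = [r.answer for r in reports]
    return max(set(answers), key=answers.count)  # placeholder majority vote
```

The key property this sketch tries to capture is that each agent operates independently on its own modality, so complementary cues (visual evidence, object interactions, question-focused captions) are only combined at the aggregation step rather than flattened into a single model's input.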

@article{kugo2025_2504.20091,
  title={VideoMultiAgents: A Multi-Agent Framework for Video Question Answering},
  author={Noriyuki Kugo and Xiang Li and Zixin Li and Ashish Gupta and Arpandeep Khatua and Nidhish Jain and Chaitanya Patel and Yuta Kyuragi and Yasunori Ishii and Masamoto Tanabiki and Kazuki Kozuka and Ehsan Adeli},
  journal={arXiv preprint arXiv:2504.20091},
  year={2025}
}