ResearchTrend.AI
TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning

13 April 2025
Xingjian Zhang
Siwei Wen
Wenjun Wu
Lei Huang
Abstract

Recently, reinforcement learning has driven substantial progress in improving the reasoning ability of large multimodal models (LMMs). However, most existing work relies on highly reasoning-intensive datasets such as mathematics and code, and researchers generally choose large-scale models as the foundation. We argue that exploring the reasoning capabilities of small-scale models remains valuable for researchers with limited computational resources. Moreover, enabling models to explain their reasoning processes on general question-answering datasets is equally meaningful. Therefore, we present TinyLLaVA-Video-R1, a small-scale video reasoning model. Built on TinyLLaVA-Video, a traceably trained video understanding model with no more than 4B parameters, it not only demonstrates significantly improved reasoning and thinking capabilities after reinforcement learning on general Video-QA datasets, but also exhibits the emergent characteristic of "aha moments". Furthermore, we share a series of experimental findings, aiming to provide practical insights for future exploration of video reasoning (thinking) abilities in small-scale models. The model is available at this https URL.

@article{zhang2025_2504.09641,
  title={TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning},
  author={Xingjian Zhang and Siwei Wen and Wenjun Wu and Lei Huang},
  journal={arXiv preprint arXiv:2504.09641},
  year={2025}
}