ResearchTrend.AI
Mavors: Multi-granularity Video Representation for Multimodal Large Language Model

14 April 2025
Yang Shi
Jiaheng Liu
Yushuo Guan
Zhenhua Wu
Yuanxing Zhang
Zihao Wang
Weihong Lin
Jingyun Hua
Zekun Wang
Xinlong Chen
Bohan Zeng
Wentao Zhang
Fuzheng Zhang
Wenjing Yang
Di Zhang
Topics: VGen, VLM
Abstract

Long-context video understanding in multimodal large language models (MLLMs) faces a critical challenge: balancing computational efficiency with the retention of fine-grained spatio-temporal patterns. Existing approaches (e.g., sparse sampling, dense sampling with low resolution, and token compression) suffer from significant information loss in temporal dynamics, spatial details, or subtle interactions, particularly in videos with complex motion or varying resolutions. To address this, we propose Mavors, a novel framework that introduces Multi-granularity video representation for holistic long-video modeling. Specifically, Mavors directly encodes raw video content into latent representations through two core components: 1) an Intra-chunk Vision Encoder (IVE) that preserves high-resolution spatial features via 3D convolutions and Vision Transformers, and 2) an Inter-chunk Feature Aggregator (IFA) that establishes temporal coherence across chunks using transformer-based dependency modeling with chunk-level rotary position encodings. Moreover, the framework unifies image and video understanding by treating images as single-frame videos via sub-image decomposition. Experiments across diverse benchmarks demonstrate Mavors' superiority in maintaining both spatial fidelity and temporal continuity, significantly outperforming existing methods in tasks requiring fine-grained spatio-temporal reasoning.
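The abstract mentions that the IFA uses chunk-level rotary position encodings to model temporal order across chunks. As a minimal NumPy sketch (not the paper's implementation; the dimension, base frequency, and chunk indexing are assumptions), standard rotary encoding applied at the chunk level rotates pairs of feature dimensions by an angle proportional to the chunk index, so dot products between chunk features depend only on their relative temporal offset:

```python
import numpy as np

def chunk_rope(x, pos, base=10000.0):
    """Apply rotary position encoding at chunk granularity.

    x:   (num_chunks, dim) array of per-chunk features, dim even
    pos: (num_chunks,) chunk indices along the video timeline
    """
    pos = np.asarray(pos, dtype=float)
    d = x.shape[-1]
    # One rotation frequency per feature pair, as in standard RoPE
    inv_freq = base ** (-np.arange(0, d, 2) / d)        # (d/2,)
    ang = pos[:, None] * inv_freq[None, :]              # (num_chunks, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]                     # paired dimensions
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                  # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Rotations preserve norms, and chunk index 0 is left unchanged:
feats = np.random.default_rng(0).normal(size=(4, 8))
encoded = chunk_rope(feats, np.arange(4))
```

Because each pair of dimensions is only rotated, the encoding changes direction but not magnitude of the chunk features, which is what lets attention scores reflect relative chunk distance.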

@article{shi2025_2504.10068,
  title={Mavors: Multi-granularity Video Representation for Multimodal Large Language Model},
  author={Yang Shi and Jiaheng Liu and Yushuo Guan and Zhenhua Wu and Yuanxing Zhang and Zihao Wang and Weihong Lin and Jingyun Hua and Zekun Wang and Xinlong Chen and Bohan Zeng and Wentao Zhang and Fuzheng Zhang and Wenjing Yang and Di Zhang},
  journal={arXiv preprint arXiv:2504.10068},
  year={2025}
}