REEF: Relevance-Aware and Efficient LLM Adapter for Video Understanding

Abstract

Integrating vision models into large language models (LLMs) has sparked significant interest in creating vision-language foundation models, especially for video understanding. Recent methods often utilize memory banks to handle untrimmed videos for video-level understanding. However, they typically compress visual memory using similarity-based greedy approaches, which can overlook the contextual importance of individual tokens. To address this, we introduce an efficient LLM adapter designed for video-level understanding of untrimmed videos that prioritizes the contextual relevance of spatio-temporal tokens. Our framework leverages scorer networks to selectively compress the visual memory bank and filter spatial tokens based on relevance, using a differentiable Top-K operator for end-to-end training. Across three key video-level understanding tasks (untrimmed video classification, video question answering, and video captioning), our method achieves competitive or superior results on four large-scale datasets while reducing computational overhead by up to 34%. The code will be available soon on GitHub.
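
The abstract only sketches the selection mechanism, so the snippet below is a rough, hypothetical illustration of the idea: a small scorer network assigns a relevance score to each spatio-temporal token, and a straight-through Top-K relaxation keeps the K highest-scoring tokens while letting gradients reach the scorer for end-to-end training. The names (TokenScorer, relevance_topk) and the straight-through choice are assumptions made here for clarity, not the paper's actual differentiable Top-K operator.

# Hypothetical sketch of relevance-based token selection; not the paper's implementation.
import torch
import torch.nn as nn


class TokenScorer(nn.Module):
    """Small MLP that assigns a relevance score to each token."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.GELU(), nn.Linear(dim // 2, 1)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) -> scores: (batch, num_tokens)
        return self.net(tokens).squeeze(-1)


def relevance_topk(tokens: torch.Tensor, scores: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k highest-scoring tokens per sample.

    Forward pass uses a hard top-k mask; backward pass routes gradients
    through the soft (sigmoid) scores via a straight-through estimator,
    one simple stand-in for a differentiable Top-K operator.
    """
    soft = torch.sigmoid(scores)                           # (B, N), differentiable
    idx = scores.topk(k, dim=-1).indices                   # (B, k), hard selection
    hard = torch.zeros_like(soft).scatter_(-1, idx, 1.0)   # one-hot keep mask
    mask = hard + (soft - soft.detach())                   # straight-through trick
    kept = tokens * mask.unsqueeze(-1)                     # zero out dropped tokens
    # Gather only the retained tokens so the memory bank actually shrinks.
    return kept.gather(1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))


if __name__ == "__main__":
    B, N, D, K = 2, 64, 256, 16
    x = torch.randn(B, N, D, requires_grad=True)
    scorer = TokenScorer(D)
    pruned = relevance_topk(x, scorer(x), K)
    print(pruned.shape)          # torch.Size([2, 16, 256])
    pruned.sum().backward()      # gradients reach both the tokens and the scorer

In this toy example the selected tokens pass through unchanged in the forward pass, while the sigmoid scores provide a gradient path to the scorer parameters; the actual method's compression of the visual memory bank and spatial filtering would build on a similar relevance-scored selection.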

@article{reza2025_2504.05491,
  title={REEF: Relevance-Aware and Efficient LLM Adapter for Video Understanding},
  author={Sakib Reza and Xiyun Song and Heather Yu and Zongfang Lin and Mohsen Moghaddam and Octavia Camps},
  journal={arXiv preprint arXiv:2504.05491},
  year={2025}
}