A Temporal Modeling Framework for Video Pre-Training on Video Instance Segmentation

22 March 2025
Qing Zhong
Peng-Tao Jiang
Wen Wang
Guodong Ding
Lin Wu
Kaiqi Huang
Abstract

Contemporary Video Instance Segmentation (VIS) methods typically adhere to a pre-train then fine-tune regime, where a segmentation model trained on images is fine-tuned on videos. However, the lack of temporal knowledge in the pre-trained model introduces a domain gap that may adversely affect VIS performance. To effectively bridge this gap, we present a novel video pre-training approach to enhance VIS models, especially for videos with intricate instance relationships. Our crucial innovation focuses on reducing disparities between the pre-training and fine-tuning stages. Specifically, we first introduce consistent pseudo-video augmentations to create diverse pseudo-video samples for pre-training while maintaining instance consistency across frames. We then incorporate a multi-scale temporal module that enhances the model's ability to capture temporal relations through self- and cross-attention over short- and long-term temporal spans. Our approach imposes no constraints on model architecture and can integrate seamlessly with various VIS methods. Experimental results on commonly adopted VIS benchmarks show that our method consistently outperforms state-of-the-art methods, achieving a notable 4.0% increase in average precision on the challenging OVIS dataset.
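
The two ingredients described in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden PyTorch illustration, not the authors' implementation: the function make_pseudo_video and the class MultiScaleTemporalBlock are hypothetical names, and the single shift/scale trajectory, window size, and attention layout are simplified stand-ins for the consistent pseudo-video augmentations and the short-/long-term self- and cross-attention the paper describes.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_pseudo_video(image, num_frames=5, max_shift=8, max_scale=0.1):
    # Build a (T, C, H, W) pseudo-video from one image by applying a single
    # smooth random shift/scale trajectory, so instances remain consistent
    # across frames (hypothetical stand-in for the paper's augmentations).
    c, h, w = image.shape
    shifts = torch.linspace(0, 1, num_frames).unsqueeze(1) * \
             torch.empty(1, 2).uniform_(-max_shift, max_shift)
    scales = 1.0 + torch.linspace(0, 1, num_frames) * \
             torch.empty(1).uniform_(-max_scale, max_scale)
    frames = []
    for t in range(num_frames):
        theta = torch.tensor([[scales[t].item(), 0.0, shifts[t, 0].item() / (w / 2)],
                              [0.0, scales[t].item(), shifts[t, 1].item() / (h / 2)]]).unsqueeze(0)
        grid = F.affine_grid(theta, [1, c, h, w], align_corners=False)
        frames.append(F.grid_sample(image.unsqueeze(0), grid, align_corners=False))
    return torch.cat(frames, dim=0)

class MultiScaleTemporalBlock(nn.Module):
    # Toy temporal module: short-term attention over a local window of frames
    # plus long-term attention over the whole clip, applied to per-frame
    # instance-query embeddings of shape (T, N, D).
    def __init__(self, dim=256, num_heads=8, short_window=3):
        super().__init__()
        self.short_window = short_window
        self.short_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.long_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, queries):
        t, n, d = queries.shape
        short_out = torch.zeros_like(queries)
        for i in range(t):
            lo = max(0, i - self.short_window // 2)
            hi = min(t, i + self.short_window // 2 + 1)
            ctx = queries[lo:hi].reshape(1, -1, d)      # (1, window*N, D)
            q = queries[i].unsqueeze(0)                 # (1, N, D)
            short_out[i] = self.short_attn(q, ctx, ctx)[0].squeeze(0)
        x = self.norm1(queries + short_out)
        flat = x.reshape(1, t * n, d)                   # long-term: whole clip
        long_out = self.long_attn(flat, flat, flat)[0].reshape(t, n, d)
        return self.norm2(x + long_out)

if __name__ == "__main__":
    clip = make_pseudo_video(torch.rand(3, 64, 64))     # (5, 3, 64, 64)
    queries = torch.rand(5, 10, 256)                    # 10 queries per frame
    print(clip.shape, MultiScaleTemporalBlock()(queries).shape)

In practice the pseudo-video clips would be generated from image pre-training data and the temporal block inserted into an existing query-based VIS architecture; both of those choices, like the names above, are assumptions made for illustration.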

View on arXiv
@article{zhong2025_2503.17672,
  title={A Temporal Modeling Framework for Video Pre-Training on Video Instance Segmentation},
  author={Qing Zhong and Peng-Tao Jiang and Wen Wang and Guodong Ding and Lin Wu and Kaiqi Huang},
  journal={arXiv preprint arXiv:2503.17672},
  year={2025}
}