
Multi-Context Temporal Consistent Modeling for Referring Video Object Segmentation

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025
Main: 4 pages
3 figures
Bibliography: 1 page
Abstract

Referring video object segmentation aims to segment the objects in a video that correspond to a given text description. Existing transformer-based temporal modeling approaches suffer from query inconsistency and limited consideration of context. Query inconsistency produces unstable masks that switch between different objects partway through the video. Limited consideration of context leads to segmenting the wrong object because the relationship between the given text and the instances is not adequately modeled. To address these issues, we propose the Multi-context Temporal Consistency Module (MTCM), which consists of an Aligner and a Multi-Context Enhancer (MCE). The Aligner removes noise from queries and aligns them across frames to achieve query consistency. The MCE predicts text-relevant queries by considering multiple contexts. We applied MTCM to four different models, improving performance on all of them, and in particular achieving 47.6 J&F on the MeViS benchmark. Code is available at this https URL.
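The two components can be pictured with a minimal sketch. All shapes, operations, and function names below are illustrative assumptions, not the authors' implementation: the Aligner is approximated by temporal smoothing of per-frame object queries, and the MCE by scoring queries against a text embedding at both frame level and video level.

```python
import numpy as np

def aligner(queries):
    """Hypothetical Aligner: denoise and align per-frame queries with a
    temporal moving average so each query slot stays tied to one object."""
    aligned = queries.copy()
    for t in range(1, queries.shape[0]):
        # Smooth each frame's queries toward the previous frame's (assumption)
        aligned[t] = 0.5 * aligned[t - 1] + 0.5 * queries[t]
    return aligned

def multi_context_enhancer(queries, text_emb):
    """Hypothetical MCE: combine frame-level and video-level text-query
    similarity, then reweight queries so text-relevant ones dominate."""
    frame_sim = queries @ text_emb                       # (T, N) frame context
    video_sim = queries.mean(axis=0) @ text_emb          # (N,)  video context
    scores = frame_sim + video_sim[None, :]              # fuse the two contexts
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return queries * weights[..., None]                  # emphasize relevant queries

# Toy run: 4 frames, 3 object queries, 8-dim embeddings (all synthetic)
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 3, 8))
text = rng.normal(size=(8,))
out = multi_context_enhancer(aligner(q), text)
```

In the actual paper these steps are learned transformer operations on decoder queries; the sketch only conveys the roles the abstract assigns to each module.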
