
Joint Modeling of Feature, Correspondence, and a Compressed Memory for Video Object Segmentation

Abstract

Current prevailing Video Object Segmentation (VOS) methods follow an extraction-then-matching pipeline, which first extracts features from the current and reference frames independently and then performs dense matching between them. This decoupled pipeline restricts inter-frame information propagation to high-level features, withholding fine-grained details from the matching stage. Furthermore, pixel-wise matching lacks a holistic understanding of the target, making it prone to disturbance by similar distractors. To address these issues, we propose a unified VOS framework, coined JointFormer, which jointly models feature extraction, correspondence matching, and a compressed memory. The core Joint Modeling Block leverages attention to simultaneously extract and propagate target information from the reference frame to the current frame and a compressed memory token. This joint scheme enables extensive multi-layer propagation beyond the high-level feature space and facilitates robust, instance-distinctive feature learning. To incorporate long-term and holistic target information, we introduce a compressed memory token with a customized online updating mechanism, which aggregates target features and propagates temporal information frame by frame, enhancing global modeling consistency. JointFormer achieves new state-of-the-art performance on the DAVIS 2017 val/test-dev benchmarks (89.7% and 87.6%) and the YouTube-VOS 2018/2019 val benchmarks (87.0% and 87.0%), outperforming existing works. To demonstrate its generalizability, the model is further evaluated on four newer benchmarks of varied difficulty: MOSE for complex scenes, VISOR for egocentric videos, VOST for complex transformations, and LVOS for long-term videos.
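To make the joint modeling idea concrete, below is a minimal sketch of a block in which current-frame tokens, reference-frame tokens, and a single compressed memory token attend to each other, plus a frame-wise memory update. This is an illustration only: the class and function names (JointBlock, update_memory), the momentum-style update, and the exact token routing are assumptions and do not reproduce the authors' implementation.

# Minimal PyTorch sketch of joint attention over current tokens, reference
# tokens, and a compressed memory token. Names and routing are assumptions.
import torch
import torch.nn as nn

class JointBlock(nn.Module):
    """One block where current-frame, reference-frame, and memory tokens
    interact through a single attention pass."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, cur, ref, mem):
        # Concatenate all tokens so feature extraction and correspondence
        # matching happen jointly, layer by layer.
        x = torch.cat([cur, ref, mem], dim=1)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        n_cur, n_ref = cur.shape[1], ref.shape[1]
        return x[:, :n_cur], x[:, n_cur:n_cur + n_ref], x[:, n_cur + n_ref:]

@torch.no_grad()
def update_memory(mem, cur_feat, momentum: float = 0.9):
    """Hypothetical online update: blend pooled current-frame features
    into the compressed memory token each frame."""
    pooled = cur_feat.mean(dim=1, keepdim=True)  # (B, 1, C)
    return momentum * mem + (1.0 - momentum) * pooled

# Usage: B sequences of N patch tokens with channel dim C.
B, N, C = 2, 196, 256
block = JointBlock(C)
cur, ref, mem = torch.randn(B, N, C), torch.randn(B, N, C), torch.zeros(B, 1, C)
cur, ref, mem = block(cur, ref, mem)
mem = update_memory(mem, cur)

In this sketch the memory token carries target information across frames in a frame-wise manner, which is the role the abstract attributes to the compressed memory; the actual propagation scheme in the paper may differ.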

@article{zhang2025_2308.13505,
  title={Joint Modeling of Feature, Correspondence, and a Compressed Memory for Video Object Segmentation},
  author={Jiaming Zhang and Yutao Cui and Gangshan Wu and Limin Wang},
  journal={arXiv preprint arXiv:2308.13505},
  year={2025}
}