Learning Spatial-Semantic Features for Robust Video Object Segmentation

10 July 2024
Xin Li
Deshui Miao
Zhenyu He
Yaowei Wang
Huchuan Lu
Ming-Hsuan Yang
    VOS
Abstract

Tracking and segmenting multiple similar objects with distinct or complex parts in long-term videos is particularly challenging due to the ambiguity in identifying target components and the confusion caused by occlusion, background clutter, and changes in appearance or environment over time. In this paper, we propose a robust video object segmentation framework that learns spatial-semantic features and discriminative object queries to address these issues. Specifically, we construct a spatial-semantic block comprising a semantic embedding component and a spatial dependency modeling part, which associates global semantic features with local spatial features to provide a comprehensive target representation. In addition, we develop a masked cross-attention module that generates object queries focusing on the most discriminative parts of target objects, alleviating noise accumulation and ensuring effective long-term query propagation. Extensive experimental results show that the proposed method achieves state-of-the-art performance on benchmark datasets, including DAVIS 2017 test (87.8%), YouTube-VOS 2019 (88.1%), MOSE val (74.0%), and LVOS test (73.0%), and demonstrate the effectiveness and generalization capacity of our model. The source code and trained models are released at this https URL.
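
As a rough illustration only (not the authors' released implementation), the sketch below shows in plain PyTorch one way the two components described in the abstract could be wired together: a spatial-semantic block that fuses a globally attended semantic embedding with locally modeled spatial dependencies, and a masked cross-attention update in which each object query attends only to its own predicted region so that background noise does not accumulate during long-term propagation. All class names, dimensions, and design details here are assumptions made for illustration.

import torch
import torch.nn as nn


class SpatialSemanticBlock(nn.Module):
    """Hypothetical block fusing global semantic context with local spatial dependencies."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Semantic embedding component: global context via multi-head self-attention.
        self.semantic_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Spatial dependency modeling: local context via a depth-wise 3x3 convolution.
        self.spatial_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.proj = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone features of the current frame.
        tokens = feat.flatten(2).transpose(1, 2)                      # (B, HW, C)
        semantic, _ = self.semantic_attn(tokens, tokens, tokens)      # global semantics
        spatial = self.spatial_conv(feat).flatten(2).transpose(1, 2)  # local structure
        fused = self.proj(torch.cat([semantic, spatial], dim=-1))
        return self.norm(tokens + fused)                              # (B, HW, C)


class MaskedQueryUpdate(nn.Module):
    """Hypothetical masked cross-attention that propagates per-object queries."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries: torch.Tensor, feat_tokens: torch.Tensor,
                outside_mask: torch.Tensor) -> torch.Tensor:
        # queries:      (B, N, C)  one query per tracked object
        # feat_tokens:  (B, HW, C) spatial-semantic features of the current frame
        # outside_mask: (B, N, HW) boolean, True where a pixel lies OUTSIDE that
        #               object's predicted mask, so each query attends only to its
        #               own (most discriminative) region.
        attn_mask = outside_mask.repeat_interleave(self.num_heads, dim=0)  # (B*heads, N, HW)
        # Guard against fully masked rows (e.g. a temporarily invisible object),
        # which would otherwise produce NaNs in the attention softmax.
        attn_mask = attn_mask & ~attn_mask.all(dim=-1, keepdim=True)
        updated, _ = self.cross_attn(queries, feat_tokens, feat_tokens,
                                     attn_mask=attn_mask)
        return self.norm(queries + updated)


# Illustrative shapes only: a 16x24 feature map (HW = 384) and three tracked objects.
feats = SpatialSemanticBlock()(torch.randn(1, 256, 16, 24))            # (1, 384, 256)
queries = MaskedQueryUpdate()(torch.randn(1, 3, 256), feats,
                              torch.zeros(1, 3, 384, dtype=torch.bool))

Masking the attention per object, rather than sharing one mask across all queries, is one plausible reading of "the most discriminative parts" in the abstract; the all-masked-row guard is a common robustness trick, not something stated in the paper.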

@article{li2025_2407.07760,
  title={Learning Spatial-Semantic Features for Robust Video Object Segmentation},
  author={Xin Li and Deshui Miao and Zhenyu He and Yaowei Wang and Huchuan Lu and Ming-Hsuan Yang},
  journal={arXiv preprint arXiv:2407.07760},
  year={2025}
}