From Sight to Insight: Unleashing Eye-Tracking in Weakly Supervised Video Salient Object Detection

Qi Qin
Runmin Cong
Gen Zhan
Yiting Liao
Sam Kwong
Main: 12 pages · 10 figures · Bibliography: 3 pages
Abstract

The eye-tracking video saliency prediction (VSP) task and the video salient object detection (VSOD) task both focus on the most attractive objects in a video, presenting their results as predicted heatmaps and pixel-level saliency masks, respectively. In practical applications, eye-tracker annotations are easier to obtain and align closely with the authentic visual attention patterns of human eyes. This paper therefore introduces fixation information to assist the detection of video salient objects under weak supervision. On the one hand, we consider how to better explore and exploit the information provided by fixations, and propose a Position and Semantic Embedding (PSE) module that provides location and semantic guidance during feature learning. On the other hand, we achieve spatiotemporal feature modeling under weak supervision from the perspectives of feature selection and feature contrast. A Semantics and Locality Query (SLQ) Competitor with semantic and locality constraints is designed to select the best-matching, most accurate object query for spatiotemporal modeling. In addition, an Intra-Inter Mixed Contrastive (IIMC) model improves spatiotemporal modeling under weak supervision by forming an intra-video and inter-video contrastive learning paradigm. Experimental results on five popular VSOD benchmarks show that our model outperforms competing methods on various evaluation metrics.
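The intra-video/inter-video contrastive paradigm mentioned above can be illustrated with a minimal InfoNCE-style sketch: features of frames from the same video act as positives, while features drawn from other videos act as negatives. Note that the function names, the choice of adjacent frames as positive pairs, and the cosine-similarity InfoNCE form below are illustrative assumptions, not the paper's actual IIMC formulation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push negatives away.
    (Illustrative form; not necessarily the loss used in the paper.)"""
    def sim(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive at index 0

def mixed_contrastive_loss(video_feats, tau=0.1):
    """Sketch of a mixed intra/inter-video contrastive objective.
    Intra-video: adjacent-frame features of the same video form positive pairs.
    Inter-video: features from all other videos serve as negatives."""
    losses = []
    for vi, frames in enumerate(video_feats):
        negatives = [f for vj, fs in enumerate(video_feats) if vj != vi
                     for f in fs]
        for t in range(len(frames) - 1):
            losses.append(info_nce(frames[t], frames[t + 1], negatives, tau))
    return float(np.mean(losses))
```

In this toy setup, lowering the loss simultaneously encourages temporal consistency within each video and discriminability across videos, which is the intuition behind combining the two contrast directions.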

@article{qin2025_2506.23519,
  title={From Sight to Insight: Unleashing Eye-Tracking in Weakly Supervised Video Salient Object Detection},
  author={Qi Qin and Runmin Cong and Gen Zhan and Yiting Liao and Sam Kwong},
  journal={arXiv preprint arXiv:2506.23519},
  year={2025}
}