Treating Motion as Option with Output Selection for Unsupervised Video Object Segmentation

Unsupervised video object segmentation aims to detect the most salient object in a video without any external guidance regarding the object. Salient objects often exhibit distinctive movements compared to the background, and recent methods leverage this by combining motion cues from optical flow maps with appearance cues from RGB images. However, because optical flow maps are often closely correlated with segmentation masks, networks can become overly dependent on motion cues during training, leading to vulnerability when faced with confusing motion cues and resulting in unstable predictions. To address this challenge, we propose a novel motion-as-option network that treats motion cues as an optional component rather than a necessity. During training, we randomly input RGB images into the motion encoder instead of optical flow maps, which implicitly reduces the network's reliance on motion cues. This design ensures that the motion encoder is capable of processing both RGB images and optical flow maps, leading to two distinct predictions depending on the type of input provided. To make the most of this flexibility, we introduce an adaptive output selection algorithm that determines the optimal prediction during testing.
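The two mechanisms described above can be sketched in a few lines: randomly substituting the RGB image for the optical flow map at the motion encoder's input during training, and picking between the two resulting predictions at test time. This is a minimal illustration, not the paper's implementation; the substitution probability `p_rgb` and the margin-based confidence score used for selection are hypothetical stand-ins for the paper's actual hyperparameters and adaptive selection criterion.

```python
import random

def choose_motion_input(rgb, flow, p_rgb=0.5, training=True):
    """Motion-as-option training: with probability p_rgb, feed the RGB
    image to the motion encoder instead of the optical flow map, so the
    encoder learns to handle both input types. p_rgb is a hypothetical
    hyperparameter, not a value taken from the paper."""
    if training and random.random() < p_rgb:
        return rgb
    return flow

def margin_confidence(pred):
    """Toy confidence score: mean distance of per-pixel foreground
    probabilities from 0.5 (higher = more decisive prediction)."""
    return sum(abs(p - 0.5) for p in pred) / len(pred)

def select_output(pred_with_flow, pred_with_rgb, confidence=margin_confidence):
    """Toy output selection: keep whichever of the two predictions is
    more confident. The paper's adaptive selection algorithm differs;
    this rule is only an illustrative stand-in."""
    if confidence(pred_with_flow) >= confidence(pred_with_rgb):
        return pred_with_flow
    return pred_with_rgb
```

At inference, the network is run twice, once with the flow map and once with the RGB image as the motion-encoder input, and `select_output` returns the prediction judged more reliable.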
@article{cho2025_2309.14786,
  title={Treating Motion as Option with Output Selection for Unsupervised Video Object Segmentation},
  author={Suhwan Cho and Minhyeok Lee and Jungho Lee and MyeongAh Cho and Seungwook Park and Jaeyeob Kim and Hyunsung Jang and Sangyoun Lee},
  journal={arXiv preprint arXiv:2309.14786},
  year={2025}
}