Minimalistic Video Saliency Prediction via Efficient Decoder & Spatio-Temporal Action Cues
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025

Main: 3 pages · Bibliography: 2 pages · 2 figures · 3 tables
Abstract
This paper introduces ViNet-S, a 36 MB model based on the ViNet architecture with a U-Net design, featuring a lightweight decoder that significantly reduces model size and parameter count without compromising performance. Additionally, ViNet-A (148 MB) incorporates spatio-temporal action localization (STAL) features, in contrast to traditional video saliency models that rely on action classification backbones. Our studies show that an ensemble of ViNet-S and ViNet-A, formed by averaging their predicted saliency maps, achieves state-of-the-art performance on three visual-only and six audio-visual saliency datasets, outperforming transformer-based models in both parameter efficiency and real-time throughput, with ViNet-S exceeding 1000 fps.
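The ensemble described above simply averages the saliency maps predicted by the two models. A minimal sketch of this fusion step, assuming the two maps arrive as same-shaped arrays (the helper name and renormalization are illustrative, not from the paper):

```python
import numpy as np

def ensemble_saliency(map_s: np.ndarray, map_a: np.ndarray) -> np.ndarray:
    """Fuse two predicted saliency maps by pixel-wise averaging.

    Hypothetical helper: stands in for averaging the ViNet-S and
    ViNet-A outputs as described in the abstract.
    """
    fused = (map_s + map_a) / 2.0
    # Renormalize to [0, 1] so the fused output is still a valid saliency map
    fused -= fused.min()
    peak = fused.max()
    return fused / peak if peak > 0 else fused

# Toy example with random stand-in "predictions"
rng = np.random.default_rng(0)
pred_s = rng.random((4, 4))   # placeholder for a ViNet-S saliency map
pred_a = rng.random((4, 4))   # placeholder for a ViNet-A saliency map
fused = ensemble_saliency(pred_s, pred_a)
```

In practice the maps would be per-frame model outputs resized to a common resolution before averaging.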
