Slow and Steady Feature Analysis: Higher Order Temporal Coherence in Video
Learned image representations constitute the current state-of-the-art for visual recognition, yet they notoriously require large amounts of human-labeled data to learn effectively. Unlabeled video data has the potential to reduce this cost, if learning algorithms can exploit the frames' temporal coherence as a weak---but free---form of supervision. Existing methods perform "slow" feature analysis, encouraging the image representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture how the visual content changes. We propose to generalize slow feature analysis to "steady" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer that minimizes a contrastive loss on tuples of sequential frames from unlabeled video. Focusing on the case of triplets of frames, the proposed method encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse image and video datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object recognition, scene classification, and action recognition tasks.
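To make the "steadiness" idea concrete, the following is a minimal sketch of a second-order temporal coherence penalty on a frame triplet, written with numpy. The function name, margin value, and exact contrastive form are illustrative assumptions, not the paper's precise loss: for a true sequential triplet the penalty is the squared norm of the discrete second derivative of the features (so constant-velocity feature trajectories incur zero cost), and for a non-sequential "negative" triplet a hinge pushes that quantity beyond a margin.

```python
import numpy as np

def steady_loss(z1, z2, z3, is_sequential, margin=1.0):
    """Illustrative second-order ("steady") coherence loss on a triplet.

    z1, z2, z3: feature vectors of three frames (e.g., CNN embeddings).
    is_sequential: True if the frames are genuinely consecutive in video.
    margin: hinge margin for negative (non-sequential) triplets; the
    value 1.0 here is an arbitrary choice for the sketch.
    """
    # Discrete second derivative in feature space:
    # the change of the change between consecutive frames.
    second_diff = np.linalg.norm((z3 - z2) - (z2 - z1))
    if is_sequential:
        # Positive triplet: feature velocity should stay steady.
        return second_diff ** 2
    # Negative triplet: contrastive hinge keeps the representation
    # from collapsing to a trivially "steady" (constant) solution.
    return max(0.0, margin - second_diff) ** 2
```

A constant-velocity trajectory such as z1=(0,0), z2=(1,1), z3=(2,2) yields zero loss when marked sequential, while a direction-reversing triplet marked non-sequential also incurs no penalty once its second difference exceeds the margin. In training, this term would act as a regularizer alongside a supervised or slow-feature objective, as the abstract describes.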