Temporal Selective Max Pooling Towards Practical Face Recognition
- CVBM
In this report, we address two challenges in building a real-world face recognition system: pose variation in uncontrolled environments and the computational expense of processing a video stream. First, we argue that the frame-wise feature mean cannot characterize the variation among frames. We propose instead to preserve the overall pose diversity so that the video feature represents the subject identity; identity then becomes the only source of variation across videos, since pose varies even within a single video. Following this variation-untangling idea, we present a pose-robust face verification algorithm in which each video is represented as a bag of frame-wise CNN features. Second, instead of simply using all the frames, the algorithm centers on key-frame selection. This is achieved by pose quantization using pose distances to K-means centroids, which reduces the number of feature vectors from hundreds to K while still preserving the overall pose diversity. Recognition is implemented with a rank list of one-to-one similarities (i.e., verification) using the proposed video representation. On the official 5,000 video pairs of the YouTube Faces dataset, our algorithm achieves performance comparable to the state of the art, which averages deep features over all frames. Notably, the proposed generic algorithm is verified on a public dataset and yet applicable in real-world systems.
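The pipeline described above (pose quantization via K-means, one key frame per centroid, then a pooled video feature compared by one-to-one similarity) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the choice of K, the plain NumPy K-means, and the use of cosine similarity and max pooling are all assumptions for demonstration.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny K-means in NumPy (illustrative; any K-means would do)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids from the assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def select_key_frames(pose_feats, k):
    """Pose quantization: keep the frame closest to each pose centroid,
    reducing hundreds of frames to at most K while preserving pose diversity."""
    centroids = kmeans(pose_feats, k)
    dists = np.linalg.norm(pose_feats[:, None] - centroids[None], axis=2)
    return np.unique(dists.argmin(axis=0))  # one frame index per centroid

def video_feature(cnn_feats, pose_feats, k=5):
    """Temporal selective max pooling over the selected key frames
    (max pooling assumed here, following the paper's title)."""
    idx = select_key_frames(pose_feats, k)
    return cnn_feats[idx].max(axis=0)

def verify(feat_a, feat_b):
    """One-to-one similarity between two video features (cosine assumed)."""
    denom = np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-12
    return float(feat_a @ feat_b / denom)
```

In this sketch, `pose_feats` stands for a per-frame pose descriptor (e.g., estimated head-pose angles) and `cnn_feats` for the per-frame CNN embeddings; both are hypothetical placeholders for whatever pose estimate and face network a system actually uses.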