End-to-end Video-Level Representation Learning for Action Recognition
In recent years, deep learning methods for action recognition have developed rapidly. However, work on learning video-level representations remains rare: current methods either suffer from the confusion caused by partial observation of a video, or lack end-to-end training. In this paper, we build upon two-stream ConvNets and present DTPP (Deep networks with Temporal Pyramid Pooling), an end-to-end video-level representation learning approach, to address these problems. Specifically, RGB frames and optical flow stacks are first sparsely sampled across the whole video by segment-based sampling. A temporal pyramid pooling layer then aggregates the frame-level features, which carry spatial and temporal cues, across the whole video. The trained model thus yields a compact video-level representation with multiple temporal scales, which is both global and sequence-aware. Experimental results show that DTPP achieves state-of-the-art performance on two challenging video action datasets, UCF101 and HMDB51, with either ImageNet or Kinetics pre-training.
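The aggregation step can be sketched as follows. This is a minimal NumPy illustration of temporal pyramid pooling over frame-level features, not the authors' implementation; the pyramid structure `(1, 2, 4)` and max-pooling choice are assumptions for illustration only.

```python
import numpy as np

def temporal_pyramid_pool(features, levels=(1, 2, 4)):
    """Aggregate T frame-level feature vectors (T x D) into a single
    video-level vector by max-pooling over a temporal pyramid.

    levels -- number of contiguous temporal bins at each pyramid level
              (the (1, 2, 4) structure here is illustrative, not the
              paper's exact configuration).
    """
    T, D = features.shape
    pooled = []
    for bins in levels:
        # Split the T frames into `bins` contiguous temporal segments.
        edges = np.linspace(0, T, bins + 1).astype(int)
        for b in range(bins):
            # Guard against empty segments when bins > T.
            seg = features[edges[b]:max(edges[b + 1], edges[b] + 1)]
            pooled.append(seg.max(axis=0))  # max-pool within the segment
    # Concatenating all bins preserves coarse-to-fine temporal order,
    # making the representation sequence-aware as well as global.
    return np.concatenate(pooled)  # shape: (sum(levels) * D,)

# Toy example: 8 sparsely sampled frames with 5-D features each.
feats = np.random.rand(8, 5)
video_repr = temporal_pyramid_pool(feats)
print(video_repr.shape)  # (35,) since (1 + 2 + 4) * 5 = 35
```

Because the output size depends only on the pyramid structure and feature dimension, not on the number of sampled frames, the representation stays fixed-length across videos of different durations.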
View on arXiv