
A²M²-Net: Adaptively Aligned Multi-Scale Moment for Few-Shot Action Recognition

Main: 14 pages
Appendix: 7 pages
Bibliography: 6 pages
14 figures, 10 tables
Abstract

Thanks to its ability to alleviate the cost of large-scale annotation, few-shot action recognition (FSAR) has attracted increasing attention from researchers in recent years. Existing FSAR approaches typically neglect the role of individual motion patterns in comparison and under-explore feature statistics for video dynamics. As a result, they struggle to handle the challenging temporal misalignment in video dynamics, particularly when using 2D backbones. To overcome these limitations, this work proposes an adaptively aligned multi-scale second-order moment network, namely A²M²-Net, to describe latent video dynamics with a collection of powerful representation candidates and to align them adaptively in an instance-guided manner. To this end, A²M²-Net involves two core components: adaptive alignment (A² module) for matching, and multi-scale second-order moment (M² block) for strong representation. Specifically, the M² block develops a collection of semantic second-order descriptors at multiple spatio-temporal scales, while the A² module adaptively selects informative candidate descriptors according to the individual motion pattern. In this way, A²M²-Net handles the challenging temporal misalignment problem by establishing an adaptive alignment protocol over strong representations. Notably, the proposed method generalizes well to various few-shot settings and diverse metrics. Experiments conducted on five widely used FSAR benchmarks show that A²M²-Net achieves very competitive performance compared to state-of-the-art methods, demonstrating its effectiveness and generalization.
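To make the core idea of the M² block concrete, the sketch below illustrates generic second-order moment (covariance) pooling applied at multiple temporal scales. This is an illustrative assumption, not the paper's implementation: the function names, the choice of averaging as the multi-scale operator, and the scale set are all hypothetical.

```python
import numpy as np

def second_order_moment(features):
    """Covariance (second-order moment) descriptor of a feature set.

    features: (N, d) array of N spatio-temporal feature vectors.
    Returns a symmetric (d, d) covariance matrix.
    """
    mu = features.mean(axis=0, keepdims=True)        # first-order moment
    centered = features - mu
    return centered.T @ centered / features.shape[0]

def multiscale_moments(clip, scales=(1, 2, 4)):
    """Hypothetical multi-scale variant over frame-level features.

    clip: (T, d) array of frame features; T must be divisible by each scale.
    Each scale averages groups of `s` consecutive frames before pooling,
    yielding one (d, d) second-order descriptor per temporal scale.
    """
    descriptors = []
    for s in scales:
        pooled = clip.reshape(clip.shape[0] // s, s, -1).mean(axis=1)
        descriptors.append(second_order_moment(pooled))
    return descriptors
```

Under this reading, a query clip and a support clip each yield a small bank of candidate descriptors, and an alignment step (the paper's A² module) would then match and weight them per instance rather than comparing a single fixed-scale representation.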
