
Task-Adapter++: Task-specific Adaptation with Order-aware Alignment for Few-shot Action Recognition

Abstract

Large-scale pre-trained models have achieved remarkable success in language and image tasks, leading an increasing number of studies to explore the application of pre-trained image models, such as CLIP, to few-shot action recognition (FSAR). However, current methods generally suffer from several problems: 1) direct fine-tuning often undermines the generalization capability of the pre-trained model; 2) task-specific information is insufficiently explored in the visual branch; 3) semantic order information is typically overlooked during text modeling; 4) existing cross-modal alignment techniques ignore the temporal coupling of multimodal information. To address these issues, we propose Task-Adapter++, a parameter-efficient dual adaptation method for both the image and text encoders. Specifically, to make full use of the variations across different few-shot learning tasks, we design a task-specific adaptation for the image encoder so that the most discriminative information is attended to during feature extraction. Furthermore, we leverage large language models (LLMs) to generate detailed sequential sub-action descriptions for each action class, and introduce semantic order adapters into the text encoder to effectively model the sequential relationships between these sub-actions. Finally, we develop a fine-grained cross-modal alignment strategy that maps visual features to the same temporal stage as the corresponding semantic descriptions. Extensive experiments demonstrate the effectiveness and superiority of the proposed method, which consistently achieves state-of-the-art performance on five benchmarks. The code is open-sourced at this https URL.

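The abstract does not give implementation details, but the two core ideas can be illustrated with a minimal sketch: a residual bottleneck adapter of the kind commonly used for parameter-efficient tuning of a frozen encoder, and an order-aware alignment score that compares each temporal stage of the video with the sub-action description of the same order. All module names, shapes, and hyperparameters below are illustrative assumptions, not the authors' released code.

# Illustrative sketch only -- assumed shapes/names, NOT the Task-Adapter++ implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckAdapter(nn.Module):
    """Residual down-project / up-project adapter (hypothetical bottleneck size)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # initialize as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.gelu(self.down(x)))

def stagewise_alignment_score(video_feats: torch.Tensor,
                              text_feats: torch.Tensor) -> torch.Tensor:
    """
    video_feats: (S, D) -- one visual feature per temporal stage.
    text_feats:  (S, D) -- one embedding per ordered sub-action description.
    Only stage s of the video is compared with sub-action s of the text,
    i.e. a diagonal, order-aware alignment.
    """
    v = F.normalize(video_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    return (v * t).sum(dim=-1).mean()

if __name__ == "__main__":
    adapter = BottleneckAdapter(dim=512)
    frame_tokens = torch.randn(8, 512)           # 8 frames, CLIP-like feature width
    adapted = adapter(frame_tokens)              # same shape, residual update
    stages = adapted.reshape(4, 2, 512).mean(1)  # pool frames into 4 temporal stages
    subactions = torch.randn(4, 512)             # stand-in for LLM sub-action embeddings
    print(stagewise_alignment_score(stages, subactions).item())

In a few-shot episode, such a score could rank candidate classes by how well the ordered sub-action descriptions match the temporal progression of the query video; the paper itself should be consulted for the actual adapter placement and alignment formulation.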
@article{cao2025_2505.06002,
  title={Task-Adapter++: Task-specific Adaptation with Order-aware Alignment for Few-shot Action Recognition},
  author={Congqi Cao and Peiheng Han and Yueran Zhang and Yating Yu and Qinyi Lv and Lingtong Min and Yanning Zhang},
  journal={arXiv preprint arXiv:2505.06002},
  year={2025}
}