ResearchTrend.AI
Transformer Based Self-Context Aware Prediction for Few-Shot Anomaly Detection in Videos

2 March 2025
Gargi V. Pillai
Ashish Verma
Debashis Sen
Abstract

Anomaly detection in videos is a challenging task, as anomalies in different videos are of different kinds. A promising way to approach video anomaly detection is therefore to learn the non-anomalous nature of the video at hand. To this end, we propose a one-class, few-shot-learning-driven, transformer-based approach for anomaly detection in videos that is self-context aware. Features from the first few consecutive non-anomalous frames in a video are used to train the transformer to predict the non-anomalous feature of the subsequent frame. This takes place under the attention of a self-context learned from the input features themselves. After this learning, given a few previous frames, the video-specific transformer is used to infer whether a frame is anomalous by comparing the feature it predicts with the actual feature. The effectiveness of the proposed method with respect to the state of the art is demonstrated through qualitative and quantitative results on different standard datasets. We also study the positive effect of the self-context used in our approach.
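The pipeline the abstract describes — fit a small transformer on the features of the first few non-anomalous frames so it predicts the next frame's feature, then score later frames by the distance between predicted and actual features — can be sketched as follows. This is a minimal PyTorch sketch, not the authors' implementation: the class name, feature dimension, and training loop are invented for illustration, and the paper's specific feature extractor and self-context attention mechanism are not reproduced.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Hypothetical sketch: a small transformer encoder that predicts the
    feature of the next frame from the features of the previous k frames."""
    def __init__(self, feat_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, feat_dim)

    def forward(self, ctx):
        # ctx: (batch, k, feat_dim) — features of k consecutive frames
        h = self.encoder(ctx)          # self-attention over the context window
        return self.head(h[:, -1])     # predicted feature of the next frame

def anomaly_score(model, ctx, actual):
    """L2 distance between the predicted and actual frame features;
    a large score suggests an anomalous frame."""
    with torch.no_grad():
        pred = model(ctx)
    return torch.norm(pred - actual, dim=-1)

# Few-shot, video-specific fitting on the first few non-anomalous frames.
torch.manual_seed(0)
feats = torch.randn(1, 9, 64)          # stand-in features of 9 frames
model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):                    # brief one-class training
    loss = nn.functional.mse_loss(model(feats[:, :8]), feats[:, 8])
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, frames whose score exceeds a threshold would be flagged.
score = anomaly_score(model, feats[:, 1:9], feats[:, 8])
```

In practice the context features would come from a pretrained backbone rather than random tensors, and the threshold on the score would be chosen per dataset.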

@article{pillai2025_2503.00670,
  title={Transformer Based Self-Context Aware Prediction for Few-Shot Anomaly Detection in Videos},
  author={Gargi V. Pillai and Ashish Verma and Debashis Sen},
  journal={arXiv preprint arXiv:2503.00670},
  year={2025}
}