Weakly-supervised Contrastive Learning with Quantity Prompts for Moving Infrared Small Target Detection

Weiwei Duan
Luping Ji
Shengjia Chen
Sicheng Zhu
Jianghong Huang
Mao Ye
Main: 10 Pages
12 Figures
Bibliography: 3 Pages
Abstract

Different from general object detection, moving infrared small target detection faces huge challenges due to tiny target sizes and weak backgrounds. Currently, most existing methods are fully supervised, relying heavily on a large number of manual target-wise annotations. However, manually annotating video sequences is often expensive and time-consuming, especially for low-quality infrared frame images. Inspired by general object detection, non-fully supervised strategies (e.g., weakly supervised ones) are believed to hold potential for reducing annotation requirements. To break through traditional fully-supervised frameworks, as a first exploration, this paper proposes a new weakly-supervised contrastive learning (WeCoL) scheme that requires only simple target quantity prompts during model training. Specifically, in our scheme, a potential target mining strategy based on the pretrained Segment Anything Model (SAM) is designed to integrate target activation maps and multi-frame energy accumulation. Besides, contrastive learning is adopted to further improve the reliability of pseudo-labels, by measuring the similarity between positive and negative samples in feature space. Moreover, we propose a long-short term motion-aware learning scheme to jointly model the local motion patterns and global motion trajectories of small targets. Extensive experiments on two public datasets (DAUB and ITSDT-15K) verify that our weakly-supervised scheme often outperforms early fully-supervised methods, and its performance can even reach over 90% of state-of-the-art (SOTA) fully-supervised ones.
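
The abstract only sketches the pipeline, so the following two minimal Python (PyTorch) sketches illustrate the ideas it names. All function names, tensor shapes, and the multiplicative fusion choice are assumptions for illustration, not the authors' implementation. The first sketch mines candidate targets by fusing multi-frame energy accumulation with a target activation map, keeping as many peaks as the quantity prompt indicates; such peaks could then serve as point prompts for SAM.

import torch

def energy_accumulation(frames: torch.Tensor) -> torch.Tensor:
    # frames: (T, H, W) grayscale infrared clip with values in [0, 1].
    # Absolute inter-frame differences accumulate high energy at moving
    # small targets, while static background largely cancels out.
    diffs = (frames[1:] - frames[:-1]).abs()      # (T-1, H, W)
    energy = diffs.sum(dim=0)                     # (H, W)
    return energy / (energy.max() + 1e-8)         # normalize to [0, 1]

def mine_candidates(energy: torch.Tensor, activation: torch.Tensor,
                    num_targets: int) -> torch.Tensor:
    # Fuse motion energy with an appearance activation map (both (H, W))
    # and keep the top-k peaks, where k is the quantity prompt.
    # Returns (k, 2) candidate centers as (row, col) indices.
    fused = energy * activation                   # agreement of motion and appearance
    _, idx = fused.flatten().topk(num_targets)
    rows = torch.div(idx, fused.shape[1], rounding_mode="floor")
    cols = idx % fused.shape[1]
    return torch.stack([rows, cols], dim=1)

The second sketch shows one plausible way to score pseudo-labels contrastively: each candidate embedding is compared against positive (target-like) and negative (background) sample embeddings, and an InfoNCE-style ratio yields a reliability score in (0, 1).

import torch.nn.functional as F

def pseudo_label_scores(cand: torch.Tensor, pos: torch.Tensor,
                        neg: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # cand: (N, D) candidate-region embeddings; pos: (P, D) confident
    # target embeddings; neg: (M, D) background embeddings.
    cand, pos, neg = (F.normalize(x, dim=1) for x in (cand, pos, neg))
    s_pos = (cand @ pos.T / tau).exp().sum(dim=1)  # similarity to positives
    s_neg = (cand @ neg.T / tau).exp().sum(dim=1)  # similarity to negatives
    return s_pos / (s_pos + s_neg)                 # higher = more target-like

Candidates scoring near 1 would be kept as pseudo-labels; the temperature tau controls how sharply the score favors the nearest samples.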

@article{duan2025_2507.02454,
  title={Weakly-supervised Contrastive Learning with Quantity Prompts for Moving Infrared Small Target Detection},
  author={Weiwei Duan and Luping Ji and Shengjia Chen and Sicheng Zhu and Jianghong Huang and Mao Ye},
  journal={arXiv preprint arXiv:2507.02454},
  year={2025}
}