Multi-Perspective Data Augmentation for Few-shot Object Detection

25 February 2025
Anh-Khoa Nguyen Vu
Quoc-Truong Truong
Vinh-Tiep Nguyen
Thanh Duc Ngo
Thanh-Toan Do
Tam V. Nguyen
Abstract

Recent few-shot object detection (FSOD) methods have focused on augmenting synthetic samples for novel classes, showing promising results with the rise of diffusion models. However, the diversity of such datasets is often limited in representativeness because they lack awareness of typical and hard samples, especially in the context of foreground and background relationships. To tackle this issue, we propose a Multi-Perspective Data Augmentation (MPAD) framework. In terms of foreground-foreground relationships, we propose in-context learning for object synthesis (ICOS) with bounding box adjustments to enhance the detail and spatial information of synthetic samples. Inspired by the large margin principle, under which support samples play a vital role in defining class boundaries, we design a Harmonic Prompt Aggregation Scheduler (HPAS) that mixes prompt embeddings at each time step of the diffusion generation process to produce hard novel samples. For foreground-background relationships, we introduce a Background Proposal method (BAP) to sample typical and hard backgrounds. Extensive experiments on multiple FSOD benchmarks demonstrate the effectiveness of our approach. Our framework significantly outperforms traditional methods, achieving an average increase of 17.5% in nAP50 over the baseline on PASCAL VOC. Code is available at this https URL.
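
The abstract describes HPAS only at a high level (mixing prompt embeddings at each diffusion time step), so the following Python sketch is one plausible reading rather than the authors' implementation: a linear interpolation between a base-class and a novel-class prompt embedding, with a weight that varies over the sampling trajectory so that generated objects land near the class boundary. Every name here (hpas_sample, denoise_step, mix_fn, the tensor shapes) is a hypothetical stand-in, not an API from the paper's released code.

import torch

def hpas_sample(denoise_step, x_t, timesteps, novel_emb, base_emb, mix_fn):
    """Run a denoising loop while mixing two prompt embeddings.

    denoise_step: hypothetical callable (x_t, t, prompt_emb) -> x_{t-1};
        stands in for one reverse-diffusion step of a text-conditioned model.
    novel_emb / base_emb: prompt embeddings for a novel and a base class.
    mix_fn: maps normalized progress s in [0, 1] to a mixing weight alpha.
    """
    n = max(len(timesteps) - 1, 1)
    for i, t in enumerate(timesteps):
        alpha = mix_fn(i / n)                       # time-dependent weight
        prompt_emb = alpha * novel_emb + (1 - alpha) * base_emb
        x_t = denoise_step(x_t, t, prompt_emb)      # one denoising step
    return x_t

# Toy usage with dummy tensors and a placeholder "model":
if __name__ == "__main__":
    dummy_step = lambda x, t, e: x - 0.01 * x       # placeholder dynamics
    x = torch.randn(1, 4, 64, 64)                   # latent, SD-like shape
    novel, base = torch.randn(77, 768), torch.randn(77, 768)
    out = hpas_sample(dummy_step, x, range(50, 0, -1), novel, base,
                      mix_fn=lambda s: s)           # ramp toward novel class

Under this reading, intermediate alpha values yield "hard" novel samples that blend base- and novel-class appearance, consistent with the large-margin motivation stated in the abstract; the actual schedule and mixing rule used by the paper may differ.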

@article{vu2025_2502.18195,
  title={Multi-Perspective Data Augmentation for Few-shot Object Detection},
  author={Anh-Khoa Nguyen Vu and Quoc-Truong Truong and Vinh-Tiep Nguyen and Thanh Duc Ngo and Thanh-Toan Do and Tam V. Nguyen},
  journal={arXiv preprint arXiv:2502.18195},
  year={2025}
}