
VK-Det: Visual Knowledge Guided Prototype Learning for Open-Vocabulary Aerial Object Detection

Main: 6 pages, 8 figures, 9 tables; Bibliography: 2 pages; Appendix: 6 pages
Abstract

To identify objects beyond predefined categories, open-vocabulary aerial object detection (OVAD) leverages the zero-shot capabilities of vision-language models (VLMs) to generalize from base to novel categories. Existing approaches typically rely on self-learning mechanisms with weak text supervision to generate region-level pseudo-labels that align detectors with VLMs' semantic spaces. However, this dependence on text induces semantic bias, restricting open-vocabulary expansion to text-specified concepts. We propose VK-Det, a Visual Knowledge-guided open-vocabulary object Detection framework that requires no extra supervision. First, we discover and exploit the vision encoder's inherent perception of informative regions to attain fine-grained localization and adaptive distillation. Second, we introduce a novel prototype-aware pseudo-labeling strategy that models inter-class decision boundaries through feature clustering and maps detection regions to latent categories via prototype matching, enhancing attention to novel objects while compensating for missing supervision. Extensive experiments show state-of-the-art performance, achieving 30.1 $\mathrm{mAP}^{N}$ on DIOR and 23.3 $\mathrm{mAP}^{N}$ on DOTA, outperforming even methods that use extra supervision.
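The prototype-aware pseudo-labeling step can be pictured with a minimal sketch: cluster region features to obtain class prototypes, then assign each detected region to its nearest prototype as a latent-category pseudo-label. This is not the authors' implementation; the function names, the use of k-means, and the similarity threshold are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): build prototypes by
# clustering L2-normalized region features, then pseudo-label regions by
# nearest-prototype cosine similarity.
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(region_feats: np.ndarray, num_prototypes: int) -> np.ndarray:
    """Cluster normalized region features; centroids serve as prototypes."""
    feats = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    km = KMeans(n_clusters=num_prototypes, n_init=10, random_state=0).fit(feats)
    protos = km.cluster_centers_
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def assign_pseudo_labels(region_feats: np.ndarray,
                         prototypes: np.ndarray,
                         sim_thresh: float = 0.5) -> np.ndarray:
    """Map each region to its nearest prototype; low-similarity regions
    stay unlabeled (-1). The threshold value is an assumption."""
    feats = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    sims = feats @ prototypes.T              # (num_regions, num_prototypes)
    labels = sims.argmax(axis=1)
    labels[sims.max(axis=1) < sim_thresh] = -1
    return labels

# Usage with placeholder features, e.g. 512-d embeddings from a frozen
# VLM vision encoder:
feats = np.random.randn(1000, 512).astype(np.float32)
protos = build_prototypes(feats, num_prototypes=20)
pseudo = assign_pseudo_labels(feats, protos)
```

Regions matched to a prototype act as extra positives for latent (potentially novel) categories, which is how the strategy compensates for supervision missing on novel classes.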
