ResearchTrend.AI

Classifier-guided CLIP Distillation for Unsupervised Multi-label Classification

21 March 2025
Dongseob Kim
Hyunjung Shim
Abstract

Multi-label classification is crucial for comprehensive image understanding, yet acquiring accurate annotations is challenging and costly. To address this, a recent study proposed unsupervised multi-label classification leveraging CLIP, a powerful vision-language model. Despite CLIP's proficiency, it suffers from view-dependent predictions and inherent bias, which limit its effectiveness. We propose a novel method that addresses these issues by leveraging multiple views near target objects, guided by the Class Activation Mapping (CAM) of the classifier, and by debiasing pseudo-labels derived from CLIP predictions. Our Classifier-guided CLIP Distillation (CCD) selects multiple local views without extra labels and debiases predictions to enhance classification performance. Experimental results validate our method's superiority over existing techniques across diverse datasets. The code is available at this https URL.
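The two ingredients the abstract names (selecting local views near CAM peaks, then debiasing CLIP's pseudo-labels) can be sketched schematically. The function names, the peak-centered cropping rule, and the simple prior-normalization used for debiasing below are illustrative assumptions for intuition only, not CCD's actual procedure, which is specified in the full paper.

```python
import numpy as np

def cam_guided_crop(cam, crop_size):
    """Pick one local view: a crop box centered on the CAM peak.

    cam: 2D class activation map (H x W).
    Returns (y0, x0, y1, x1), clipped to stay inside the map.
    """
    h, w = cam.shape
    peak_y, peak_x = np.unravel_index(np.argmax(cam), cam.shape)
    half = crop_size // 2
    y0 = int(np.clip(peak_y - half, 0, h - crop_size))
    x0 = int(np.clip(peak_x - half, 0, w - crop_size))
    return (y0, x0, y0 + crop_size, x0 + crop_size)

def debias_pseudo_labels(probs, class_prior):
    """Toy debiasing: divide each class score by an estimated class
    prior and renormalize, so systematically over-predicted classes
    are down-weighted. This is an assumed stand-in for CCD's scheme.
    """
    adjusted = probs / np.maximum(class_prior, 1e-8)
    return adjusted / adjusted.sum(axis=-1, keepdims=True)
```

Under this sketch, a crop around the CAM peak gives CLIP a view dominated by one object (mitigating view-dependence), and the prior division counteracts a constant per-class bias in the raw predictions.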

@article{kim2025_2503.16873,
  title={Classifier-guided CLIP Distillation for Unsupervised Multi-label Classification},
  author={Dongseob Kim and Hyunjung Shim},
  journal={arXiv preprint arXiv:2503.16873},
  year={2025}
}