DeCLIP: Decoupled Prompting for CLIP-based Multi-Label Class-Incremental Learning
Multi-label class-incremental learning (MLCIL) continuously expands the label space while recognizing multiple co-occurring classes per image, making it prone to catastrophic forgetting and a high false-positive rate (FPR). Extending CLIP to MLCIL is non-trivial: co-occurring categories violate CLIP's one-image-one-text alignment paradigm, and task-level partial labeling inflates the FPR. We propose DeCLIP, a replay-free and parameter-efficient framework that decouples CLIP representations via a one-to-one class-specific prompting scheme. By assigning each category its own prompt space, DeCLIP prevents semantic confusion across labels and decouples multi-label images into per-class views compatible with CLIP pre-training. The learned prompts are preserved as knowledge anchors, mitigating catastrophic forgetting without replay. We further introduce Adaptive Similarity Tempering (AST), a task-aware strategy that suppresses the FPR without dataset-specific tuning. Experiments on MS-COCO and PASCAL VOC show that DeCLIP consistently outperforms prior methods with minimal trainable parameters.
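To make the one-to-one prompting idea concrete, here is a minimal sketch of per-class prompting with tempered similarities. This is not the paper's implementation: all names (`class_prompts`, `tau`, `multilabel_logits`) and the specific tempering form are illustrative assumptions; it only shows how each class can own its own prompt vector, scored independently against a frozen image feature with per-label sigmoids.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): each class owns its own
# prompt vector in a separate prompt space, so labels never share
# parameters. Prompts learned for past tasks would be frozen, acting as
# knowledge anchors against forgetting.
rng = np.random.default_rng(0)
num_classes, embed_dim = 4, 8

# One prompt per class, L2-normalized like CLIP text embeddings.
class_prompts = rng.normal(size=(num_classes, embed_dim))
class_prompts /= np.linalg.norm(class_prompts, axis=1, keepdims=True)

def multilabel_logits(image_feat, prompts, tau=0.07):
    """Per-class cosine similarity, divided by a temperature tau.
    The temperature here only illustrates the general idea of tempering
    similarities to control over-confident positives (the paper's AST
    adapts this per task; the fixed tau is an assumption)."""
    image_feat = image_feat / np.linalg.norm(image_feat)
    return (prompts @ image_feat) / tau

# A stand-in for a frozen CLIP image feature.
img = rng.normal(size=embed_dim)
logits = multilabel_logits(img, class_prompts)
# Independent sigmoid per label: multiple classes can be positive at once.
probs = 1.0 / (1.0 + np.exp(-logits))
print(probs.shape)
```

Because each label is scored in its own prompt space with an independent sigmoid, co-occurring classes do not compete through a shared softmax, which is the point of decoupling multi-label images into per-class views.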