
Interpretable Open-Vocabulary Referring Object Detection with Reverse Contrast Attention

Comments: 8 pages (main) + 2 pages bibliography + 4 pages appendix; 10 figures, 2 tables
Abstract

We propose Reverse Contrast Attention (RCA), a plug-in method that enhances object localization in vision-language transformers without retraining. RCA reweights final-layer attention by suppressing extremes and amplifying mid-level activations to let semantically relevant but subdued tokens guide predictions. We evaluate it on Open Vocabulary Referring Object Detection (OV-RefOD), introducing FitAP, a confidence-free average precision metric based on IoU and box area. RCA improves FitAP in 11 out of 15 open-source VLMs, with gains up to +26.6%. Effectiveness aligns with attention sharpness and fusion timing; while late-fusion models benefit consistently, models like DeepSeek-VL2 also improve, pointing to capacity and disentanglement as key factors. RCA offers both interpretability and performance gains for multimodal transformers. Code and dataset are available from this https URL
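The abstract describes RCA only at a high level: final-layer attention weights are reweighted so that extreme values are suppressed and mid-level activations are amplified, then used to guide localization. The exact formula is in the paper, not here; the sketch below is an illustrative NumPy implementation under the assumption that "mid-level" is taken relative to each query row's median and that the reweighting is a Gaussian bump around it (the bandwidth `sigma` and the per-row renormalization are likewise assumptions, not the authors' specification).

```python
import numpy as np

def reverse_contrast_attention(attn: np.ndarray, sigma: float = 0.15) -> np.ndarray:
    """Illustrative reweighting in the spirit of RCA (not the paper's exact rule):
    suppress extreme attention weights, amplify mid-level ones, renormalize.

    attn: (num_queries, num_keys) row-stochastic attention matrix.
    """
    # Per-row reference point for "mid-level" activation (assumed: the median).
    mid = np.median(attn, axis=-1, keepdims=True)
    # Gaussian bump centered on the row median: weights near the middle of the
    # distribution keep most of their mass, extreme highs and lows are damped.
    reweighted = attn * np.exp(-((attn - mid) ** 2) / (2.0 * sigma ** 2))
    # Renormalize each query row back to a probability distribution.
    return reweighted / reweighted.sum(axis=-1, keepdims=True)
```

Because the transform only rescales an existing attention map, it can be applied at inference time to a frozen model's final layer, which matches the "plug-in, no retraining" framing above.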
