Enabling object detectors to recognize out-of-distribution (OOD) objects is vital for building reliable systems. A primary obstacle is that detection models typically receive no supervisory signal from unknown data, leading to overconfident predictions on OOD objects. While prior work estimates OOD uncertainty from the detection model and in-distribution (ID) samples alone, we explore pre-trained vision-language representations for object-level OOD detection. We first discuss the limitations of applying image-level CLIP-based OOD detection methods to object-level scenarios. Building on these insights, we propose RUNA, a novel framework that leverages a dual-encoder architecture to capture rich contextual information and employs a regional uncertainty alignment mechanism to distinguish ID from OOD objects effectively. We further introduce a few-shot fine-tuning approach that aligns region-level semantic representations, improving the model's ability to discriminate between similar objects. Our experiments show that RUNA substantially surpasses state-of-the-art methods in object-level OOD detection, particularly in challenging scenarios with diverse and complex object instances.
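For context, the sketch below illustrates the kind of image-level CLIP-based OOD scoring the abstract critiques, naively applied to cropped object regions: a maximum-softmax score over ID class prompts (MCM-style). This is a hedged illustration of the baseline, not the RUNA method; the ID class list, box coordinates, and temperature value are hypothetical placeholders.

# Illustrative baseline only: image-level CLIP OOD scoring (MCM-style)
# applied to cropped detection regions. Not the RUNA method.
import torch
import clip  # OpenAI CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

id_classes = ["car", "person", "dog"]  # hypothetical ID label set
text = clip.tokenize([f"a photo of a {c}" for c in id_classes]).to(device)

def region_ood_score(image_path, box, temperature=100.0):
    """Return an OOD score in [0, 1] for one detected region.

    box: (left, upper, right, lower) pixel coordinates of the detection.
    Higher score = more likely out-of-distribution.
    """
    region = Image.open(image_path).crop(box)
    image = preprocess(region).unsqueeze(0).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(image)
        txt_feat = model.encode_text(text)
        # Normalize so the dot product is cosine similarity
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        probs = (temperature * img_feat @ txt_feat.T).softmax(dim=-1)
    # Low maximum softmax probability over ID prompts => likely OOD
    return 1.0 - probs.max().item()

Because each region is scored in isolation against global image-level prompts, this baseline discards the surrounding context that RUNA's dual-encoder design and regional uncertainty alignment are described as exploiting.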
@article{zhang2025_2503.22285,
  title={RUNA: Object-level Out-of-Distribution Detection via Regional Uncertainty Alignment of Multimodal Representations},
  author={Bin Zhang and Jinggang Chen and Xiaoyang Qu and Guokuan Li and Kai Lu and Jiguang Wan and Jing Xiao and Jianzong Wang},
  journal={arXiv preprint arXiv:2503.22285},
  year={2025}
}